The Use and Impact of Facial Recognition Technology Issue Sheets

How FR Technology Works

Key Messages

  • FR technology uses image processing techniques to detect and analyze the biometric features of an individual’s face in order to identify the individual or verify their identity.
  • The use of FR involves the collection and processing of highly sensitive personal information.
  • FR is an especially powerful technology because it enables identification and authentication rapidly, at scale, for a relatively low cost, using existing sources of images and videos.
  • Both the ease of deployment and potentially grave impacts on privacy speak to the need for clear regulation to mitigate personal and societal harms.

Background

  • FR works in the following general way (a minimal sketch follows this list):
    • A probe image is collected and input into the system
    • The face in the image is converted into a biometric template or “faceprint”
    • The system then compares the faceprint against an existing database of images that have already been converted into faceprints, and calculates the probability of a match
    • When used for identification, FR can be configured to return a list of labelled images that exceed a given threshold of similarity to the probe image
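
The sketch below illustrates this pipeline in Python. It is illustrative only: the embed_face() function, the 128-dimension template size, and the 0.8 similarity threshold are assumptions standing in for a real face-detection and embedding model, not part of any specific FR product.

```python
# Illustrative sketch of the FR pipeline described above: probe image -> faceprint ->
# comparison against a gallery of labelled faceprints -> candidate matches above a threshold.
# embed_face() is a placeholder for a trained face-embedding model (an assumption, not a real API).
import numpy as np

def embed_face(image: np.ndarray, dim: int = 128) -> np.ndarray:
    """Convert a face image into a biometric template ('faceprint').
    Placeholder: deterministic pseudo-embedding so the example stays self-contained."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    template = rng.standard_normal(dim)
    return template / np.linalg.norm(template)  # unit-length vector

def identify(probe_image: np.ndarray, gallery: dict, threshold: float = 0.8) -> list:
    """Compare the probe faceprint against labelled gallery faceprints and
    return (label, similarity) pairs that exceed the similarity threshold."""
    probe = embed_face(probe_image)
    matches = []
    for label, image in gallery.items():
        similarity = float(np.dot(probe, embed_face(image)))  # cosine similarity of unit vectors
        if similarity >= threshold:
            matches.append((label, similarity))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy usage with random "images"; a real system would use detected face crops.
rng = np.random.default_rng(0)
gallery = {"person_a": rng.integers(0, 255, (64, 64)), "person_b": rng.integers(0, 255, (64, 64))}
probe = gallery["person_a"].copy()  # the probe is the same face as person_a
print(identify(probe, gallery))
```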

Prepared by: TAD/PRPA


Web Scraping (Publicly Available Personal Information)

Key Messages

  • Web scraping allows quick and easy collection of unprecedented amounts of personal information from publicly accessible websites, including social media.
  • Much of this information is not truly “publicly available”, as defined under PIPEDA, and therefore not exempt from consent requirements.
  • Our investigation into Clearview AI found that it used such tools to unlawfully collect billions of images from various websites, without the requisite consent, in order to create a reference database for its facial recognition software.
  • Experiences such as Clearview show there is a need to avoid broad interpretations of how “publicly available” information can be used without consent, as it could lead to serious harms. Our privacy laws must ensure that an individual’s reasonable expectations are taken into consideration in determining whether information is “publicly available”.

Background

  • Web scraping is the extraction of data from websites, generally via software that automatically browses the web and collects data (see the sketch after this list). Web crawlers navigate through websites, creating copies of the data they access, without requiring the permission of a website’s operator.
  • While website operators can employ various anti-scraping techniques to identify and block automated crawlers, most of these countermeasures can be defeated.
  • As an example, Clearview AI conducted its scraping activities in contravention of terms of service of a variety of websites. Despite receiving cease-and-desist requests from Google, Facebook and Twitter, it continues to scrape images from these websites, stating that it believes it has “the right to do so”.
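
As an illustration of how simple such collection can be, the following is a minimal crawler sketch using only the Python standard library. The start URL is a placeholder; a real scraper would follow links across many pages, download each image, and typically work around the anti-scraping measures noted above.

```python
# Minimal sketch of a scraper that harvests image URLs from a publicly accessible page.
# Uses only the standard library; start_url is a placeholder chosen for illustration.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ImageCollector(HTMLParser):
    """Collects the absolute URLs of all <img> tags found in a page."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(urljoin(self.base_url, src))

start_url = "https://example.com/"  # placeholder; a real crawler would queue many pages
with urlopen(start_url) as response:
    html = response.read().decode("utf-8", errors="ignore")

collector = ImageCollector(start_url)
collector.feed(html)
print(collector.image_urls)  # candidate images a scraper would then download and process
```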

Prepared by: Compliance


Public Sector Uses of FR

Key Messages

  • We have engaged with departments on FR initiatives for immigration and border security, national security, and law enforcement.
  • Some of our recommendations included:
    • Advising institutions to consider whether use of FR is necessary, proportionate and effective in context. 
    • Regarding accuracy, we commented on potential bias and possible disproportionate impact on some individuals in relation to CBSA Primary Inspection Kiosks.
    • We recommended the CBSA ensure meaningful consent was obtained during a pilot program to test the efficacy of FR software in an operational border context (Faces on the Move).

Background

  • Since 2010, the Government Advisory Directorate has received four (4) Privacy Impact Assessments and completed six (6) consultations on initiatives explicitly involving facial recognition technology.  Discussions with law enforcement and national security on FR initiatives were only preliminary.
  • We are only aware of FR initiatives that have been reported to us by the CBSA, the Royal Canadian Mounted Police, the Department of National Defence, and Immigration, Refugees and Citizenship Canada.

Prepared by: GA


Private Sector Uses of FR

Key Messages

  • Private companies are increasingly using FR for a variety of purposes.
  • Examples include:
    • Monitoring test-taking in universities;
    • Verifying identity during financial transactions;
    • Controlling access to private property.
  • These uses raise a number of privacy concerns, including:
    • Is meaningful consent being obtained?
    • Are appropriate safeguards in place to protect highly sensitive facial data?
    • How long is the data retained?
    • Will individuals be required to surrender sensitive biometric data to private companies in order to access goods and services?

Background

  • Examples of private sector FR use:
    • Proctortrack has been used by Canadian universities to monitor and verify the identities of students while writing exams; the company was the subject of a breach in October 2020.
    • Canadian banks have deployed FR as a means of verifying identity during financial transactions, including online banking and credit card transactions.
    • A Canadian real estate firm has deployed FR to control access and monitor entry to residential buildings; tenants can opt out of the system, but images are captured automatically from anyone entering the building (including delivery drivers, for example).

Prepared by: PRPA


Authentication

Key Messages

  • Biometrics can be a useful method for authenticating an individual’s identity, and protecting important or sensitive assets.
  • However, they are intimately associated with the human body and cannot be easily changed. If breached, they can expose someone to serious and ongoing harm, such as fraud.
  • Therefore, before using biometrics, organizations should consider whether they are necessary, effective, proportional, and if a less-invasive alternative exists.
  • Biometric information must be safeguarded to a very high standard.

Background

  • Authentication refers to a one-to-one comparison with data previously enrolled by the individual (a minimal verification sketch follows this list). It is usually far less invasive than identification, which refers to a one-to-many comparison, such as cross-referencing or searching against a database.
  • The stringency of the authentication process should be commensurate with the risks. We suggest that biometrics be used as part of a multi-factor authentication system, which combines something an individual knows (e.g. a password), has (e.g. a device or key), and is (e.g. biometrics).
  • Depending on how unique and persistent a biometric identifier is, and how effective the technology used is at data matching, automated recognition systems may produce false positives or false negatives. Their effectiveness must therefore be considered before such systems are implemented.
  • There are a number of safeguards that can effectively manage the risks in many circumstances, such as:
    • storing the biometrics on the individual’s device,
    • storing them as numerical representations (templates) instead of raw images,
    • using “cancellable” templates that cannot be converted back into the original biometric information, and
    • using encryption.
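
Below is a minimal sketch of one-to-one verification, assuming templates are produced by some embedding model (not shown) and that the enrolled template is stored as a numerical representation on the individual's device, in line with the safeguards listed above. The threshold and template dimension are illustrative assumptions.

```python
# Minimal sketch of one-to-one biometric verification with an on-device template.
# Templates are numerical representations (not raw images); the embedding model that
# produces them is assumed and not shown here.
import numpy as np

def enroll(template: np.ndarray) -> np.ndarray:
    """Store only a normalized numerical template on the user's device."""
    return template / np.linalg.norm(template)

def verify(probe_template: np.ndarray, enrolled_template: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Accept the claimed identity only if the probe template is sufficiently similar
    (cosine similarity) to the template the individual enrolled earlier.
    The threshold trades off false positives against false negatives."""
    probe = probe_template / np.linalg.norm(probe_template)
    return float(np.dot(probe, enrolled_template)) >= threshold

# Toy usage: in practice, templates would come from an embedding of a face image.
rng = np.random.default_rng(1)
enrolled = enroll(rng.standard_normal(128))
genuine_probe = enrolled + 0.05 * rng.standard_normal(128)   # same person, slight variation
impostor_probe = rng.standard_normal(128)                     # different person
print(verify(genuine_probe, enrolled), verify(impostor_probe, enrolled))
```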

Prepared by: PRPA


Law Enforcement Use of Private Sector FR Data

Key Messages

  • Police agencies are responsible for ensuring any use of FR technology complies with the law and that privacy risks are managed appropriately.
  • Private sector organizations also need to meet their privacy obligations when collecting or sharing personal information with or on behalf of the police.
  • My office continues to investigate whether the RCMP’s collection and use of personal information from Clearview AI was in accordance with the federal Privacy Act.

Background

  • Clearview, a private company that actively marketed its services to police, did not acknowledge that the mass collection of biometric information from billions of people, without express consent, violated individuals’ reasonable expectations of privacy. The company was of the view that its business interests outweighed privacy rights.
  • Clearview took the view that it cannot be held responsible for offering services to law enforcement or any other entity that subsequently makes an error in its assessment of the person being investigated.
  • We are developing guidance for law enforcement agencies on the use of facial recognition technologies, to clarify the legal responsibilities of police agencies, as they currently stand, with a view to ensuring any use of FR complies with the law, minimizes privacy risks, and respects the fundamental human right to privacy.

Prepared by: PRPA
In consultation with: Compliance


FR and Artificial Intelligence (AI)

Key Messages

  • FR systems rely on AI to detect, analyze and compare the biometric features of facial images.
  • Recent advancements in AI modelling techniques have significantly improved the accuracy of FR algorithms (see relevant sheet).
  • In November 2020 my office released a series of recommendations for how PIPEDA should be amended to ensure the appropriate regulation of AI. Very few of our recommended measures are in C-11. In our view, this puts Canadians’ rights at continued risk.
  • On April 21, the European Commission introduced its proposed Artificial Intelligence Act, which includes prohibited uses of AI.

Background

  • FR systems have existed since the late 1960s; early versions relied on humans to manually select and measure the landmarks of an individual’s face, while today the process is fully automated using AI.
  • C-11 contains certain measures which go towards our recommended enhancements for regulating AI (but which are not exact): the use of de-identified information for socially beneficial purposes in certain circumstances (s. 39) and for internal research and development (s. 21); a form of explanation (s. 63(3)); and order making (s. 92(2)).
  • Gaps include the lack of: exceptions to consent for compatible purposes and legitimate commercial interests; recognition of privacy as a human right; a right to contest; demonstrable accountability (privacy and human rights by design, PIAs for high-risk initiatives, traceability, proactive inspections by the OPC); and AMPs issued by the OPC.
  • The EU’s AIA would prohibit AI that is used: to manipulate behaviour that is likely to cause physical or psychological harm, for social scoring by public authorities, and for real-time remote biometric identification in public spaces by law enforcement.

Prepared by: TAD/PRPA


How FR Technologies Transform Surveillance

Key Messages

  • FR technology can enable the collection of vast amounts of sensitive personal information about billions of people, leaving citizens vulnerable to massive and unprecedented corporate and state surveillance.
  • Newer forms of surveillance are more powerful because they are often covert and, in the case of facial recognition, combine sensitive data points with powerful algorithmic analysis.
  • The risks of mass surveillance make it abundantly clear that the rights and values of citizens must be protected by a strong, rights-based legislative framework and a strong and effective privacy regulator.

Background

  • Both the Cadillac Fairview and Clearview investigations showed how the use of facial recognition capabilities amplifies existing surveillance practices.
  • In the instance of the Clearview complaint, we observed mass collection of images along with creation of biometric facial recognition arrays, representing mass identification and surveillance of individuals.
  • In the UK, police have deployed FR in “live” settings to surveil public spaces in real-time. In these implementations, facial data is collected indiscriminately from passers-by and compared against a database of known persons of interest to the police, with matches relayed to nearby officers.

Prepared by: PRPA


Accuracy and Bias

Key Messages

  • Privacy laws require that personal information collected and used as part of an FR initiative be accurate. Inaccuracy in FR systems poses serious risks to individuals’ privacy rights when decisions are made about an individual based on inaccurate information. It also raises issues of fairness, bias, and discrimination.
  • Current FR algorithms vary widely in quality. Many exhibit differences in accuracy across demographic groups (e.g., race and gender), while some of the most accurate algorithms have “undetectable” differences across demographics (a minimal evaluation sketch follows this list).
  • As well, if the training data used to generate an FR algorithm lacks diversity, the algorithm may encode such biases by default.
  • To ensure accuracy obligations are met, the FR system as a whole must be considered: from the training data used and the FR algorithm employed, to the face database against which images are compared and the nature of human review.
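
The sketch below shows how demographic differences in accuracy can be measured, assuming a set of comparison trials labelled by demographic group. The scores, group names, and 0.8 threshold are fabricated placeholders; a real evaluation would use a large, representative benchmark such as the NIST FRVT studies.

```python
# Illustrative sketch: computing the false match rate (FMR) per demographic group.
# Each trial records the group, whether the pair is truly the same person, and the
# similarity score the FR system produced. All values below are fabricated examples.
from collections import defaultdict

trials = [
    ("group_a", False, 0.72), ("group_a", True, 0.91), ("group_a", False, 0.40),
    ("group_b", False, 0.83), ("group_b", True, 0.95), ("group_b", False, 0.85),
]
THRESHOLD = 0.8  # a match is declared when similarity >= threshold

false_matches = defaultdict(int)
impostor_trials = defaultdict(int)
for group, same_person, score in trials:
    if not same_person:                  # impostor comparison (different people)
        impostor_trials[group] += 1
        if score >= THRESHOLD:           # system wrongly declares a match
            false_matches[group] += 1

for group, total in impostor_trials.items():
    print(f"{group}: false match rate = {false_matches[group] / total:.2f}")
# A large gap between groups signals demographic bias in the algorithm or its training data.
```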

Background

  • FR technology has seen rapid improvements in recent years. Studies have shown that the majority of developers’ algorithms tested in 2018 outperform the most accurate algorithm from 2013, and that the most accurate algorithm from 2020 is substantially more accurate than any in 2018.
  • Studies have indicated that a lack of diverse training data is the main driver of bias: algorithms developed in China tended to perform better on East Asian faces than on Eastern European faces, thereby reversing the general trend.
  • The disproportionate representation of a group in a face database may lead to a “feedback loop” where use of the FR system leads police to repeatedly cast suspicion on members of the group, their associates or their community, thereby increasing the level of disproportionality over time.

Prepared by: TAD/PRPA


Impact of FR Technologies on Democratic Rights

Key Messages

  • The use of FR in public spaces, including virtual spaces, raises concerns about protection of democratic rights and freedoms.
  • Privacy is vital to dignity, autonomy, and personal growth, and it is a basic prerequisite to the free and open participation of individuals in democracy. For democracy to flourish, individuals must be able to navigate public, semi-public, and private spaces without the risk of their activities being tracked and monitored or their identities being revealed at every step.
  • As such, it is imperative that our privacy frameworks be reformed to recognize and protect privacy as a human right.

Background

  • International DPA response: In 2019 the OPC sponsored a GPA resolution stressing that privacy plays a vital role in enabling other key rights, such as freedom, equality and democracy. The GPA as a whole called upon governments to reaffirm a strong commitment to privacy as a right in itself that supports democratic processes and institutions.
  • Global concerns: the UN High Commissioner for Human Rights has stated that there should be a moratorium on the use of facial recognition technology in the context of peaceful protests, until States meet certain conditions including transparency, oversight and human rights due diligence before deploying it.

Recent examples include:

  • August 2020 – In Bridges v. South Wales Police, the Court of Appeal of England and Wales overturned the High Court’s dismissal of a challenge to South Wales Police’s use of automated facial recognition technology, finding that its use was unlawful and violated human rights. The ruling pertained to the case of an individual whose facial image was recorded during a peaceful protest.
  • Human Rights Watch has expressed “heightened concern” over Chinese FR systems armed with AI technology that can scan faces of protestors in Myanmar, and alert authorities to those on a wanted list.

Prepared by: PRPA


Racialized Communities

Key Messages

  • Despite advances in the sophistication of facial recognition, many FR algorithms still exhibit significant differences in accuracy across demographic groups (e.g., race and gender).
  • Studies have found that facial recognition technologies are more likely to misidentify or produce false positives when assessing faces of people of colour, and particularly women of colour, which could result in discriminatory treatment.
  • A false positive result in the context of a law enforcement investigation creates serious risks of harm to racialized individuals, especially where they may be disproportionately and mistakenly placed under suspicion of serious criminality.

Background

  • Studies have shown that some commercial FR software tends to misidentify individuals from different racial and gender groups at different rates, creating a risk of potential bias in the utilization of FR technology.
  • Given the nature of these inaccuracies, some scholars have expressed concern that, if not appropriately managed, FR use may ultimately serve to deepen existing tensions and inequalities relating to policing institutions.
  • It is imperative that organizations take steps to minimize inaccuracy and bias in any deployment of FR technology.

Prepared by: PRPA


Children, Seniors, Vulnerable Populations

Key Messages

  • The accuracy of FR technology can vary significantly across demographic groups. This creates a risk of unfair, unethical, or discriminatory treatment resulting from FR use.
  • If used inappropriately, FR technology can have lasting and severe effects on privacy and other fundamental rights, and this may be particularly true for social groups that have historically experienced marginalization or vulnerability.
  • When implementing FR for a specific purpose, organizations should carefully consider the unique needs, sensitivities, and disproportionate impacts on vulnerable populations, and address these risks at the outset in the design of the program.

Background

  • Vulnerable populations may have less capacity or diminished digital literacy to fully understand privacy implications that can stem from the analysis of their facial images, or to provide meaningful consent for the collection and use of their images for FR.
  • Vulnerable populations may also be less likely or able to avoid public spaces in which FR is deployed, given disparities in resources and capacity. This may lead vulnerable populations to be over-represented in the deployment of FR systems.
  • That said, there may also be some beneficial uses for vulnerable populations:
    • Law enforcement could use the technology to find missing children or seniors.
    • FR has been used in casinos to identify individuals who voluntarily opt-in to programs for compulsive gamblers.

Prepared by: PRPA


Investigation into RCMP use of Clearview AI

Key Messages

  • Our investigation into the RCMP’s prior use of Clearview AI continues. We intend to report to Parliament on the results of the investigation in the coming weeks.
  • After media reports of Canadian law enforcement using Clearview AI in January 2020, we contacted the RCMP, which confirmed it would provide the OPC with a PIA prior to deploying facial recognition.
  • We were surprised when, on February 27, 2020, the RCMP told the OPC that it had been using, and was continuing to use, Clearview AI. In light of the RCMP’s acknowledgement of its use of Clearview’s technology, we launched an investigation under the Privacy Act.
  • In July 2020, as a result of our joint investigation with Alberta, B.C. and Quebec into Clearview AI, the company ceased offering its FR services in Canada, including indefinitely suspending its contracts with the RCMP. To our knowledge, the RCMP is no longer using Clearview AI.

Background

  • RCMP public statement: on February 27, 2020, the RCMP indicated that:
    • it had two paid licenses and had used Clearview AI in 15 cases resulting in identifying and rescuing two children.
    • it was also aware of limited use of Clearview AI on a trial basis by a few of its units. It stated: “while we recognize that privacy is paramount and a reasonable expectation for Canadians, this must be balanced with the ability of law enforcement to conduct investigations and protect the safety and security of Canadians, including our most vulnerable.”

Prepared by: Compliance


Clearview AI

Key Messages

  • We published our report of findings on February 2, 2021.
  • We found that:
    1. Clearview failed to obtain consent for the collection, use and disclosure of millions of images of Canadians it scraped from various websites, or of the biometric profiles it generated.
    2. Its practices amounted to continual mass surveillance, and were for an inappropriate purpose.
  • Clearview disputed our findings and refused to follow any of our recommendations. This case demonstrates that stronger regulatory tools, including order-making powers and AMPs, are needed to help secure compliance from companies.

Background

  • Provincial cooperation: This was a joint investigation with AB, B.C. and QC.
  • Consent: Clearview did not seek consent for the use of individuals’ personal information, claiming that the websites it scraped were publicly accessible, such that the information was “publicly available”. We found that the information was not publicly available, as defined in the Regulations.
  • Purposes: Clearview indiscriminately collected, used and disclosed personal information in order to allow third-party organizations who subscribed to its service to identify individuals by uploading photos in search of a match.
  • Outcomes: Mid-way through the investigation, Clearview agreed to exit the Canadian market and cease offering its services to Canadians. At the conclusion of our investigation Clearview refused to follow any of the recommendations made by our Offices, which included that it:
    • commit to not re-entering the Canadian market;
    • cease collection, use and disclosure of images and biometric profiles; and,
    • delete the images and biometric arrays in its possession.

Prepared by: Compliance


Cadillac Fairview

Key Messages

  • Our investigation into Cadillac Fairview found that the company embedded inconspicuous cameras in digital information kiosks at 12 shopping malls to collect customers’ images, and used FR technology to estimate their age and gender.
  • Shoppers had no reason to expect that their sensitive biometric information would be collected and used in this way, and did not consent to this collection or use.
  • While the images were deleted, the sensitive biometric information of 5 million shoppers was sent to a third-party service provider and stored in a centralized database for no discernible purpose.
  • We remain concerned that Cadillac Fairview refused to commit to ensuring express, meaningful consent is obtained from shoppers should it choose to redeploy the technology in the future.

Background

  • Provincial cooperation: This was a joint investigation with AB, B.C., and involved info-sharing with QC. Our findings were published in October 2020.
  • Collection purposes: Personal information was collected in order to track foot traffic patterns and predict demographic information about mall visitors (e.g. age and gender). Unknown to Cadillac Fairview, a biometric database consisting of 5 million numerical representations of faces was also created and maintained by a third-party processor.
  • Outcomes: Cadillac Fairview has ceased use of this technology and has advised that they have no current plans to resume its use. We are concerned that Cadillac Fairview could simply recommence this practice, or one similar, requiring us to either go to court or start a new investigation.
  • Law reform: Cadillac Fairview’s refusal to commit to obtaining express and meaningful consent for future use of this technology demonstrates our need for stronger enforcement powers, including order making and AMPs to better protect Canadians’ privacy.

Prepared by: Compliance


FR Guidance

Key Messages

  • Last year, multiple media reports confirmed that numerous Canadian police agencies were using Clearview AI’s services.
  • While FR presents serious threats to privacy, there can be legitimate uses for public safety when used responsibly and in the right circumstances.
  • We are currently working with our provincial and territorial counterparts to develop joint guidance on police use of facial recognition technology.
  • We intend to consult publicly on this guidance.

Background

  • Police use of FR is regulated through a patchwork of statutes and case law that, for the most part, do not contemplate the risks of FR specifically. This creates room for uncertainty concerning what uses of FR may be acceptable, and under what circumstances.
  • The guidance is meant to clarify legal responsibilities, as they currently stand, with a view to ensuring any use of FR by police agencies complies with the law, minimizes privacy risks and respects the fundamental human right to privacy.
  • Providing guidance specifically on police use of FR is timely given real-world use of FR by the police, the potential for legitimate public safety benefits, and the very serious risks to fundamental rights and freedoms at play.
  • Our office is also developing guidance on the use of biometrics, which includes facial recognition, by public and private organizations.

Prepared by: PRPA


Moratoriums

Key Messages

  • Used responsibly and in the right circumstances, facial recognition can provide benefits to security and public safety. I do not believe it should be banned outright, though the idea of no-go zones for certain specific practices is something to consider.
  • FR requires a strong legal framework to ensure its responsible use, and that there is effective oversight. Necessity and proportionality, as well as transparency, must be clearly set out in these laws.
  • Ultimately, the decision rests with Parliament to consider a moratorium until such time as any specific legislative changes on this issue are recommended and adopted.

Background

  • The EU has proposed legislation on AI that prohibits biometric identification in publicly accessible places for the purpose of law enforcement, with exceptions.
  • In July 2020, the International Civil Liberties Monitoring Group, in conjunction with civil society organizations and privacy advocates, published a letter to Minister Blair calling for a ban on “facial recognition surveillance” by law enforcement.
  • In September 2020, Montreal City Council passed a motion placing restrictions on the use of facial recognition by city police.
  • Several US cities and states have banned law enforcement use of FR. Examples include Boston, San Francisco, Oakland, Portland, Virginia, and Vermont.
  • Some US states, including California and Oregon, have specifically banned police from using FR on body worn camera footage.
  • Over the past year, several major technology firms, including Microsoft, IBM, and Amazon, have publicly committed to self-imposed bans on the sale of FR to police agencies, pending law reform.

Prepared by: PRPA


Current Laws on FR Use

Key Messages

  • FR in Canada is regulated by a patchwork of laws, except in Quebec, which has a specific biometrics regime. Since FR operates through the use of personal information, it is captured by privacy legislation, including PIPEDA for private companies and the Privacy Act for federal institutions.
  • Police and state use of FR is limited by the Charter, which grants individuals the right to be free from unreasonable search and seizure. The Supreme Court of Canada has recognized a general right to anonymity, and a right to privacy within public spaces.
  • The lack of specific legislation for police FR use stands in contrast with the DNA Identification Act and the Identification of Criminals Act, which regulate how DNA samples, fingerprints, and mugshots can be collected, used, disclosed, and destroyed.

Background

  • In Quebec, the Act to Establish a Legal Framework for Information Technology governs the use of biometrics. Proposed Bill 64 would require organizations to notify the Commission at least 60 days before a biometric database is brought into service.
  • Under Charter s. 8, a search will be unlawful where an individual has a reasonable expectation of privacy, unless the search is authorized by law. Anonymity is part of the right to privacy (R v. Spencer, 2014 SCC 43), and individuals retain expectations of privacy in public spaces (R v. Jarvis, 2019 SCC 10).
  • The DNA Identification Act and Identification of Criminals Act set out thresholds for the collection of their specified biometrics, as well as conditions for collection (including compelled production and judicial warrants). The DNA Identification Act uses highly specific rules to limit how the National DNA Databank may be used. This reflects the sensitivity of the personal information involved.

Prepared by: Legal


Bill C-11: Need for a Rights-Based Framework

Key Messages

  • Our investigation into Clearview AI highlighted the shortcomings of the current law, many of which are left unaddressed by C-11.
  • C-11 maintains that privacy and commercial interests are competing interests that must be balanced, instead of recognizing privacy as a human right.
  • Our laws must better regulate the use of facial recognition and other new technologies, and ensure that where there is a conflict between commercial objectives and privacy protection, Canadians’ privacy rights prevail.
  • We have proposed that the purpose clause of our private sector privacy law be amended to provide a more appropriate weighting for privacy rights by adding privacy considerations to balance economic factors. It should also provide a clearer statement of the law’s purpose and constitutional grounding to strengthen the constitutional validity of the law. I would be pleased to provide this text to you.

Background

Our proposed purpose clause:

In an era in which significant economic activity relies on the analysis, circulation and exchange of personal information and its movement across interprovincial and international borders, the purpose of this Act is to promote confidence and therefore the sustainability of information-based commerce by establishing rules to govern the protection of personal information and to provide for the lawful, fair, proportional, transparent and accountable collection, use and disclosure of personal information in a manner that recognizes

      1. the fundamental right of privacy of individuals with respect to their personal information,
      2. the need of organizations to collect, use or disclose personal information for purposes and in a manner that a reasonable person would consider appropriate in the circumstances, and
      3. where personal information moves outside Canada, that the level of protection guaranteed under Canadian law should not be undermined.

Prepared by: PRPA


Bill C-11: AMPs

Key Messages

  • Despite our finding that Clearview violated Canadian law, the company has refused to implement our recommendation to delete images collected from individuals in Canada.
  • Stronger regulatory tools, including order-making powers and AMPs are needed to help secure compliance from companies like Clearview.
  • C-11 does not adequately address these shortcomings, since proposed AMP provisions would not have applied to the violations Clearview committed, and the tribunal process would create delays in enforcing AMPs.
  • There are models, such as the UK Data Protection Act, that could be emulated for issuing penalties for a broader range of violations. Under this model, when appropriate, the Commissioner could give an organization an enforcement notice, clarifying the nature of a violation, before proceeding to the recommendation or the imposition of a penalty.

Background

  • Clearview has ceased offering its services to Canadians. At the conclusion of our investigation Clearview refused to follow any of the recommendations made by our Offices, which included that it (i) commit to not re-entering the Canadian market; (ii) cease collection, use and disclosure of images and biometric profiles; and, (iii) delete the images and biometric arrays in its possession.
  • An examination of AMP regimes in the UK, Australia, Singapore, Quebec (Bill 64), Ontario (under the PHIPA), and California revealed no cases where only violations of such a narrow list of legal requirements are subject to penalties.
  • AMPs in the UK include an optional two-step process: Where the Commissioner is “satisfied that a person has failed” (a violation has occurred), they may immediately issue a “payment notice,” requiring payment of a specified amount (s. 155). However, the Commissioner can also issue an “enforcement notice” which requires that an organization either takes, or refrains from taking, steps set out in the notice (s. 149). Failure to meet the conditions of the enforcement notice could then lead to the issuance of a payment notice.

Prepared by: PRPA


Privacy Act Reform

Key Messages

  • A number of the Department of Justice’s proposals on Privacy Act reform would bring positive changes to the law to deal with the privacy risks posed by emerging technologies such as FR.
  • One example is the proposal to define and more clearly regulate the collection and use of publicly available personal information by having all of the Act’s rules apply to such information, which is not currently the case.
  • Obligations for “privacy by design”, PIAs, and oversight (including proactive audit powers) are also necessary measures to promote and enforce compliance for FR initiatives.
  • In our submission to DoJ, we made a number of recommendations that will be important for ensuring effective regulation of FR. These include identifying key elements for regulating AI, proposing modifications to enhance the collection threshold, and refining the proposed definition of publicly available personal information.

Background

  • Framework for “Publicly Available” Personal Information: We recommend that DoJ’s proposed definition be enhanced by explicitly stating that publicly available personal information does not include information in respect of which an individual has a reasonable expectation of privacy.
  • Artificial Intelligence: We recommend the Act include a definition of automated decision-making, a right to meaningful explanation and human intervention, a standard for the level of explanation required, and obligations for traceability.
  • Collection Threshold: We believe the collection standard “reasonably required” generally strikes the right balance, but have proposed key modifications to add clarity around specified purposes and proportionality.

Prepared by: PRPA


Privacy Act Reform: AI Proposals

Key Messages

  • Justice’s discussion paper suggests that there should be certain rights and accountability requirements specific to automated decision-making (ADM) in the law, and that such requirements could align with existing federal policy instruments on ADM.
  • We agree with this approach, given the unique privacy risks that such systems introduce. Enshrining in law the right to explanation and human intervention, which the TBS Directive on ADM provides for, would be particularly important in the public sector context to respect natural justice.
  • However, we recommend the law define a specific standard for the level of explanation required to allow individuals to understand how their information led to a particular decision, and require institutions to log and trace personal information used in automated decision-making.

Background

  • Inferences: We support Justice’s proposal to clarify that drawing inferences about an individual would qualify as a collection of personal information.
  • Standard for explanations: Individuals should be informed of (i) the nature of the decision to which they are being subject and the relevant personal information relied upon, and (ii) the rules that define the processing and the decision’s principal characteristics.
  • Traceability: The law should also require traceability, which would allow an institution to locate the personal information that a machine used to reach a decision (see the sketch after this list). Quebec’s Bill 64, the proposed EU AI law, and recent amendments to Ontario’s PHIPA include requirements related to traceability.
  • Collection Threshold: Justice also proposes relaxing the “reasonably required” collection threshold, in part for AI. However, we heard in our AI consultation that data minimization was realistic, particularly in light of greater flexibility to use de-identified information. As Justice also proposes a good framework for de-identification, expanding the collection threshold is not required for AI to be used.
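
A minimal sketch of what a traceability requirement could look like in practice, assuming a hypothetical automated decision function: every decision appends an audit record identifying the personal information inputs the system relied on, so the institution can later locate and explain them. The field names and the stand-in decision rule are assumptions for illustration.

```python
# Minimal sketch of traceability logging for automated decision-making.
# The decision rule and field names are hypothetical; the point is the audit record.
import json
import uuid
from datetime import datetime, timezone

def decide_and_log(applicant: dict, log_path: str = "adm_trace.log") -> bool:
    """Make a (stand-in) automated decision and append a record linking the
    decision to the personal information used to reach it."""
    inputs_used = {"age": applicant["age"], "income": applicant["income"]}
    decision = inputs_used["income"] > 50_000   # placeholder decision logic
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_used": inputs_used,             # which personal information was relied upon
        "decision": decision,
        "model_version": "rule-v1",             # supports later explanation and contestation
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return decision

print(decide_and_log({"age": 34, "income": 61_000}))
```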

Prepared by: PRPA


Other Jurisdictions

Key Messages

  • For the most part, courts and regulators are trying to apply existing privacy and biometrics laws while policy makers consider what form additional regulation, if any, should take.
  • Existing laws for biometrics generally entail stronger privacy obligations such as registration and secure storage of biometric databases, as in Quebec, or the right to sue for collection of biometric information without consent, as in Illinois.
  • California and Oregon have banned the use of FR in police body-worn cameras. San Francisco, Oakland and Virginia have banned the use of FR by the police and other public bodies.
  • The Australian parliament is working on a bill that would establish a national FR database for government use.

Background

  • Biometrics are a special category of data under the GDPR. Per Article 9, processing of special categories is prohibited unless a separate condition for processing is met. A DPIA is required for any type of processing likely to be high risk, meaning one is generally required for processing biometric data.
  • In 2019, San Francisco banned the use of FR software by the police, transit and other agencies. Oakland passed an ordinance prohibiting “acquiring, obtaining, retaining, requesting, or accessing” FR images.
  • The Australian Identity-matching Services Bill would allow sharing of identification information, including facial images, among federal, state and territory governments for the purposes of identity-matching to prevent identity crime, support law enforcement, uphold national security, and improve service delivery.

Prepared by: PRPA


EU Ban

Key Messages

  • The European Commission’s proposed Artificial Intelligence Act, if adopted, would clearly and explicitly outlaw harmful applications of AI that are incompatible with human rights.
  • They propose to prohibit AI systems used for mass surveillance, including real-time remote biometric identification in public spaces by law enforcement (subject to exceptions).
  • We are not supportive of a complete ban on the use of FR, though we believe specific no-go zones should be considered. Even in the absence of no-go zones, the OPC should play an oversight role, enforcing requirements such as necessity and proportionality to ensure the appropriate use of FR.

Background

  • The proposed EU law would prohibit: AI that deploys “subliminal techniques” to manipulate behaviour (including exploiting the vulnerabilities of groups), use by public authorities for social scoring, and use for real-time remote biometric identification by law enforcement in publicly accessible spaces.
  • Exceptions to prohibition on remote biometric identification include: (1) for search of victims, (2) prevention of terrorism or specific, substantial, and imminent threat to life/safety, and (3) identification of suspects of criminal offences that carry a minimum 3 year sentence.
  • All other remote biometric identification, including by commercial organizations, is designated “high-risk” and subject to a number of requirements such as: risk and quality assessments, logging and record-keeping for traceability, human oversight, accurate and representative data for AI training, ex-ante conformity assessments, and demonstrable accountability.
    • Our AI recommendations included: the conduct of a PIA to assess risk; a balancing test to assess the purpose, necessity, and proportionality of the measure and consideration of the interests and fundamental rights of the individual; use of de-identification where possible; a right to meaningful explanation and to contest; traceability; and demonstrable accountability, among others.

Prepared by: PRPA

