Our current laws regulating the use of facial recognition software do not offer sufficient protection against the risks the technology creates, says Daniel Therrien, Canada’s Privacy Commissioner.
Facial recognition software is a powerful but invasive technology: it relies on biometrics, an individual’s permanent, unchangeable characteristics. It is often deployed without consent, reducing personal autonomy by lessening the control people have over their personal information and eroding human rights, including the right to participate in democratic life free from surveillance.
Joint Investigation of Clearview AI, 2021 CanLII 9227 (PCC)1
On February 2, 2021, the Office of the Privacy Commissioner of Canada (OPC), together with privacy regulators in Quebec, Alberta, and British Columbia, released the findings of a joint investigation into Clearview AI’s collection, use, and disclosure of personal information through its facial recognition tool. The investigation was launched in early 2020, after news reports confirmed that a number of Canadian law enforcement agencies and private organizations had engaged Clearview’s services to identify individuals.
Clearview used publicly accessible online sources, such as social media, to amass a database of over three billion images of faces, including those of a vast number of Canadians. A user would upload an image of their target into the Clearview app and run a search; Clearview’s neural network then compared the data points derived from the target image against the database and returned any matches to the user, along with the associated metadata.
The investigation found that:
- Clearview made no attempt to obtain consent from the individuals whose information it collected, violating various privacy laws; and
- Clearview’s purpose for collecting, using, or disclosing personal information was not appropriate or legitimate, and represented the mass identification and surveillance of individuals by a private entity in the course of commercial activity.
Clearview withdrew from the Canadian market in the course of the OPC’s investigation.
Guidelines for Facial Recognition Technology in Law Enforcement
The Standing Committee on Access to Information, Privacy and Ethics has been studying the use and impact of facial recognition technology since December 13, 2021.
On May 2, 2022,2 the Privacy Commissioner of Canada presented final guidelines for the use of facial recognition technology in law enforcement, following the joint investigation into Clearview AI’s mass surveillance of Canadians and a national consultation on police use of facial recognition.
In order to assist police in ensuring that the use of facial recognition technology complies with the law, minimizes risks, and respects privacy rights, the privacy regulators recommended that:
- the law should clearly and explicitly define the purposes for which police would be authorized to use facial recognition technology;
- the law should require police use of facial recognition to be both necessary and proportionate for any given deployment of the technology;
- police use of facial recognition be subject to strong, independent oversight; and
- appropriate privacy protections be put in place to mitigate risks to individuals, including measures addressing accuracy, retention, and transparency in facial recognition initiatives.
Ontario’s Privacy Commissioner expanded on five key elements of those guidelines, stating that police agencies:
- cannot assume, and must establish, that they are lawfully authorized to use facial recognition, ensuring Charter compliance and that their purported use is necessary and proportionate;
- must establish strong, annually reviewed, accountability measures, including designing for privacy at every stage of a facial recognition initiative;
- must ensure the quality and accuracy of personal information to avoid false positives;
- should not retain personal information for longer than necessary; and
- must address transparency and public engagement, which could include publishing formal policies on the use of facial recognition technology.
On June 16, 2022, Bill C-27 received first reading in the House of Commons3.
Part 1 enacts the Consumer Privacy Protection Act, which governs the protection of personal information of individuals while taking into account the need of organizations to collect, use, or disclose personal information in the course of commercial activities. Section 6(4) of this proposed Act precludes its application to any government institution to which the Privacy Act applies, although its applicability to corporations that government institutions contract with is unclear at this point.
Part 2 establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act.
Part 3 enacts the Artificial Intelligence and Data Act, which regulates commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact AI systems, such as Clearview’s facial recognition tool.
Clearview AI’s brief period of operation in Canada revealed the significant risks facial recognition technology poses to personal privacy, attracting the attention of privacy regulators across the country as well as federal lawmakers. Although updated privacy laws are still working their way through the legislative process, any law enforcement agency that chooses to use facial recognition in the future will need to build appropriate procedures to mitigate the risks associated with the technology and protect the privacy interests of those who may be affected.
1 Joint investigation of Clearview AI, Inc. by the Office of the Privacy Commissioner of Canada, the Commission d’accès à l’information du Québec, the Information and Privacy Commissioner for British Columbia, and the Information and Privacy Commissioner of Alberta, 2021 CanLII 9227 (PCC)