
Photo: 123RF
- Police have introduced their first facial recognition policy, which is strict in some respects and loose in others
- Banning the technology's use on live camera feeds is a prudent move
- But the lack of external oversight runs counter to Europe's world-first AI law
The police force has introduced its first-ever facial recognition policy, which currently prohibits the use of facial recognition technology in live camera footage.
While it is stricter than new European laws in this regard, it is more relaxed in other respects.
Police already use facial recognition in at least five different ways, but the lack of a relevant policy has been apparent since a misguided 2020 trial of Clearview AI.
The new six-page policy states that its use must be approved and monitored and can only be used for images that have been legally collected.
The policy says there must be a significant delay between when video is collected and when it is analysed, because the risks of real-time facial recognition "outweigh the potential benefits".
The EU's new AI law allows some uses of live video within strict limits, although those restrictions were watered down before the law was finalised in May.
The EU also has some external oversight, which is absent from New Zealand’s new policy; here, police conduct their own audits.
The use of facial recognition by police is expanding around the world and is often controversial. In the UK, for example, deployment stepped up this month to curb far-right unrest, over protests from more than two dozen human rights organisations.
While Europe has begun legislating to regulate police use of facial recognition technology (FRT), and some countries, including France, have gone further and introduced outright bans, New Zealand has taken a different path: "self-regulation through policy and practice guidelines", as described by Nessa Lynch, who led a study of New Zealand police use of FRT a few years ago.
The new six-page policy says the use of FRT will be “approved, controlled, monitored and managed” and internally audited where possible, and deployed only on images that police can legally access.
“This policy ensures that police have appropriate safeguards in place around the use of FRTs and the storage of personal information, and that the use of FRTs is lawful, proportionate and appropriate in the New Zealand policing environment.”
Lawful acquisition is key: police hold thousands of illegally collected images, mostly of young Māori, and are trying to work out how to delete them under an order from the Privacy Commissioner.
Additionally, police have access to thousands of privately owned CCTV cameras, many of which are connected to networks with facial recognition capabilities.
“Police do not use the real-time FRT capabilities of third-party systems,” the policy states.
Three years ago, they formed an independent expert panel on emerging technologies and began to compile a list of various technologies in an effort to be more transparent. The list is not exhaustive and does not, for example, include technologies used by police to scrape social media.
The new policy lists five ways police already use FRT, including retrospectively matching millions of images of people in a database during investigations; missing persons inquiries; helping identify the dead; and registering gun owners.
Police have used the BriefCam system, which has facial recognition capabilities, to speed through hours of video. French police have got into trouble for using BriefCam because it breaches that country's ban.
A second FRT tool used locally, supplied by Israeli company Cellebrite, can break mobile phone encryption. Its use to pursue welfare fraud cases drew criticism last year.
Lynch and other researchers have found that New Zealand police have been relatively cautious in rolling out facial recognition technology compared with U.S. law enforcement, and the new policy banning its use in the field reflects that.
Police said they would not make a decision on implementing on-site FRT until they fully understood the security, privacy, legal and ethical implications, and had engaged with the community to understand their views.
The EU, however, has been more explicit, outright banning certain biometric identification technologies because they pose "unacceptable risks" to people.
Its blacklist covers biometric identification and categorisation of people, as well as most forms of live FRT.
The EU's world-first AI law does provide a carve-out permitting real-time use against serious crimes.
Its rules on delayed use are stricter than New Zealand’s, indicating a judge or other external sign-off is usually required.
RNZ asked police to comment on the new policy’s lack of oversight or “serious crime” threshold for the use of FRT.
The local policy says any use must be proportionate, with consideration given to human rights and privacy.
The technology, which is widely used by police, must be “sufficiently accurate and … not carry unacceptable levels of bias or discrimination”.
The retention, storage and destruction of images must comply with other police regulations.
Images can often be stored for months; in addition, FRT involves private companies’ technical systems and sometimes stores data in cloud computing centers in Australia.
The policy does not mention Māori or Te Tiriti at all. RNZ has asked police why it does not. Māori lawyers say Māori consider biometrics to be a form of “taonga”.
The policy arrives while the Office of the Privacy Commissioner is still working on a biometrics code of practice (your face is a biometric, as are your fingerprints and even your gait).
In 2021, Scotland became the first country to implement a legally binding code of practice for biometric policing.
"There are many examples of good practice in terms of strict guidelines or oversight by independent observers and auditors, but there is always a risk that internal policy settings may change due to changes in leadership or attitudes," Lynch said in a study of oversight options published in May.
“As part of the policy development, we shared it with an independent panel that looks at policing practices,” police said. “We sought input from Māori members of our internal Māori, Pacific and Ethnic Minority Services Group and the Emerging Technologies Expert Group.”
“We have also kept the Office of the Privacy Commissioner informed throughout this process and are in regular contact with them. There is also wider work ongoing around how police handle Māori data, which is an ongoing discussion across many government agencies.”