Arrested by AI: Police Ignore Standards After Face Recognition Matches
Police in the US state of Iowa have been found relying on facial recognition technology to identify suspects and, in some cases, ignoring their own departmental standards in the process.
A recent report by the Georgetown Law Center’s Privacy and Technology Project found that the Des Moines Police Department is using facial recognition software to make arrests without always following its protocols for verifying identifications.
In one instance, a man was arrested and booked into jail on the strength of a facial recognition match, even though departmental guidelines recommend that officers gather additional evidence, such as video or eyewitness accounts, to support an arrest.
According to the report, police often use the facial recognition system to rapidly identify suspects from photos obtained from social media, surveillance footage, or criminal databases. The system typically returns multiple candidate results, and it is left to the officer running the search to decide which result, if any, is accurate and whether it justifies an arrest.
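To illustrate the workflow the report describes, the sketch below shows how embedding-based face search systems generally rank candidates: the software only scores and orders possible matches, and a human must decide what, if anything, to do with the list. This is a minimal, hypothetical example; the function names, the 128-dimensional embeddings, and the random stand-in gallery are assumptions for illustration, not details of the system used in Des Moines.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search_gallery(probe: np.ndarray, gallery: dict, top_k: int = 5):
    """Return the top-k gallery identities ranked by similarity to the probe image.

    The system only ranks candidates; it never decides who the person is.
    That judgment, and any corroborating investigation, is left to a human.
    """
    scored = [(pid, cosine_similarity(probe, emb)) for pid, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


# Hypothetical example with random stand-in embeddings: every search returns
# *some* ranked list, even if the true subject is not in the gallery at all.
rng = np.random.default_rng(0)
gallery = {f"mugshot_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)  # e.g. a face cropped from surveillance footage
for person_id, score in search_gallery(probe, gallery):
    print(f"{person_id}: similarity {score:.3f}")
```

The key design point, and the source of the report's concern, is that the top-ranked candidate is not a confirmed identity: without corroborating evidence, picking a name from this list is an act of officer judgment, not a system output.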
The report has raised concerns that the use of facial recognition technology by police without proper oversight could lead to mistakes, misidentification, and unlawful arrests. Facial recognition systems can inherit bias from the data they are trained on and from the algorithms used to analyze it, which means that people from certain racial, ethnic, or gender groups may be disproportionately misidentified.
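One common way to quantify "disproportionately misidentified" is to compare false match rates across demographic groups in an audit, as in the sketch below. The audit records here are invented purely for illustration; real evaluations would use labeled benchmark data, not police casework.

```python
from collections import defaultdict

# Hypothetical audit records: (demographic_group, is_true_match, system_said_match).
audit = [
    ("group_a", False, True), ("group_a", False, False), ("group_a", True, True),
    ("group_b", False, True), ("group_b", False, True), ("group_b", True, True),
]


def false_match_rates(records):
    """False match rate per group: of the pairs that are NOT the same person,
    what fraction did the system nevertheless flag as a match?"""
    non_matches = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_true_match, predicted_match in records:
        if not is_true_match:
            non_matches[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / non_matches[g] for g in non_matches}


print(false_match_rates(audit))  # e.g. {'group_a': 0.5, 'group_b': 1.0}
```

When these rates differ sharply between groups, the same arrest workflow exposes some communities to a higher risk of wrongful identification than others.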
Des Moines Police Chief Dana Wingert has defended the use of facial recognition technology, stating that it is a “powerful tool” that can help officers “make more efficient and effective use of their time.” However, the report suggests that the lack of transparency and accountability surrounding the technology’s use could lead to its misuse and perpetuate biased policing practices.