that it does not fully automate face identification decisions. People are integral to these decisions because human oversight can ensure accuracy, accountability and ethical use. Face identification decisions can have negative impacts on people's lives, potentially restricting their access to government services or their freedom to travel across national borders, or even leading to their wrongful arrest3,4. Face identification systems that incorporate AI and human decision-making can be designed to limit these negative impacts and to ensure that they do not disproportionately affect particular socio-economic or demographic groups. To address these emerging issues, we convened an international workshop of researchers in face identification from psychology, forensic science, artificial intelligence and law, together with practitioners and policy-makers from police and government (see Workshop Members). We hope that its outcomes can assist in the development of policy and the implementation of face identification and identity management systems in government, police, private industry and the judicial system.

The main conclusions and recommendations of the workshop are:

• Face identification is now a mature multi-disciplinary field incorporating forensic science, cognitive psychology and artificial intelligence research. Compared to other biometric and pattern-matching disciplines, there is extensive research on the performance of humans and of face recognition technology in face identification tasks. This research provides a foundation of scientific understanding for designing accurate, fair, responsible and transparent human use of face recognition technology.

• Recent research shows that the accuracy of the best artificial intelligence (AI) face recognition technology is comparable to that of the best humans, but that performance is optimized by combining decisions made by the best AI and the best humans.
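One common way the combination of human and AI decisions is operationalised in the research literature is simple score-level fusion: averaging normalised similarity judgments from human examiners with an algorithm's similarity score. The sketch below is illustrative only; the function names, equal weighting and threshold are assumptions for the example, not a protocol endorsed by the workshop.

```python
def fuse_scores(human_scores, ai_score, weight_ai=0.5):
    """Average normalised same-identity ratings (0-1) from several
    human experts, then blend with one algorithm's similarity score."""
    human_mean = sum(human_scores) / len(human_scores)
    return weight_ai * ai_score + (1 - weight_ai) * human_mean

def decide(fused_score, threshold=0.5):
    """Map a fused similarity score to a match / non-match decision.
    The 0.5 threshold is an arbitrary placeholder for illustration."""
    return "match" if fused_score >= threshold else "non-match"

# Three examiners rate an image pair; the algorithm is more confident.
fused = fuse_scores([0.6, 0.7, 0.8], ai_score=0.9)
print(decide(fused))  # fused score is 0.8, so this prints "match"
```

In practice, weighting and thresholds would themselves be calibrated against ground-truth data, and fusion can be done at the decision level rather than the score level.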
A key challenge is to incorporate these research findings into operational systems with appropriate human oversight. To do this, it is first necessary to have agreed protocols for determining what are the 'best'-performing people and face recognition technologies. We refer to best-performing solutions as face identification 'experts'.

2 Centre for Data Ethics and Innovation (2020). Snapshot series: Facial recognition technology report.
3 Wrongfully accused by an algorithm (2020). New York Times.
4 Georgetown Law (2015). The perpetual line up: Unregulated police face recognition in America; Georgetown Law (2019). Face recognition on flawed data.
• Face identification 'experts' must consistently demonstrate superior performance on tasks representative of the claimed expertise. The workshop unanimously agreed that qualification as an 'expert' in making face identification decisions should be based solely on proven superior performance, not on secondary indicators of expertise such as a person's professional experience or training. Experts can be trained staff, novices with natural talent for the task or indeed AI technology, so long as their superior performance has been demonstrated. This definition can help create an effective face identification workforce, guide better design of face identification systems and provide the basis for legal definitions of expertise that are used to determine the admissibility of expert testimony in court.
• There is substantial variation in accuracy and performance between individual experts, and between different face recognition algorithms. Research shows variable accuracy even amongst the most accurate algorithms and humans. Patterns of errors also vary depending on the type of face identification decision being made; for example, certain people and algorithms make more errors on faces from certain demographic groups. Progress is being made in creating calibrated tests of human and algorithm performance that can help select appropriate experts for specific tasks and reduce bias in face identification systems.
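One basic ingredient of such calibrated tests is comparing an identifier's error rate across demographic groups on ground-truth trials. The following minimal sketch assumes a simple data layout of (group, correct) trial records; it is an illustration of the idea, not the workshop's testing protocol.

```python
from collections import defaultdict

def error_rates_by_group(trials):
    """trials: iterable of (group_label, was_correct) pairs from
    ground-truth face identification trials.
    Returns the error rate observed for each group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, correct in trials:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical trial outcomes for one identifier across two groups.
trials = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = error_rates_by_group(trials)
print(rates)  # group A errs on 1 of 3 trials, group B on 2 of 3
```

A large gap between groups, assessed on enough trials for statistical confidence, would flag the identifier as unsuitable for tasks involving the disadvantaged group, or as needing retraining or recalibration.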
• New types of expert practitioners and researchers are required to design, evaluate, oversee and explain modern face identification systems. Because these systems incorporate human and AI decision-making, people with broad expertise in the related disciplines are required. The workshop members are part of the emerging field of face identification, which is characterised by an integration of applied and theoretical questions, and of research and practice. The multidisciplinarity of our field entails that: (i) the next generation of researchers should be 'multilingual' in the discipline areas that intersect in this new field; (ii) future face identification practitioners will require more diverse knowledge of forensic science, psychology and artificial intelligence to use face recognition technology appropriately; and (iii) organisations deploying face identification systems will require similarly diverse expertise to implement, manage, evaluate and explain these complex systems.

Part 1 of this report provides background to the workshop. Part 2 is a digested analysis of our discussion, outcomes and recommendations. Part 3 captures discussions on future research directions, which are primarily directed towards researchers in this field. In this section, we also outline plans for disseminating workshop outcomes and sustaining collaboration between academics, policy-makers and practitioners. A detailed record of the meeting schedule is provided in Appendix A1.
Number of pages: 53
Publication status: Published - Aug 2020
Event: Evaluating Face Identification Expertise: Turning Theory into Practice, AGSM Building, UNSW, Sydney, Australia
Duration: 6 Jan 2020 → 7 Jan 2020
Workshop: Evaluating Face Identification Expertise
Period: 6/01/20 → 7/01/20