Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information

Iman Naja, Milan Markovic* (Corresponding Author), Pete Edwards, Wei Pang, Caitlin Cottrill, Rebecca Williams

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

To enhance trustworthiness of AI systems, a number of solutions have been proposed to document how such systems are built and used. A key facet of realizing trust in AI is how to make such systems accountable - a challenging task, not least due to the lack of an agreed definition of accountability and differing perspectives on what information should be recorded and how it should be used (e.g., to inform audit). Information originates across the life cycle stages of an AI system and from a variety of sources (individuals, organizations, systems), raising numerous challenges around collection, management, and audit. In our previous work, we argued that semantic Knowledge Graphs (KGs) are ideally suited to address those challenges and we presented an approach utilizing KGs to aid in the tasks of modelling, recording, viewing, and auditing accountability information related to the design stage of AI system development. Moreover, as KGs store data in a structured format understandable by both humans and machines, we argued that this approach provides new opportunities for building intelligent applications that facilitate and automate such tasks. In this paper, we expand our earlier work by reporting additional detailed requirements for knowledge representation and capture in the context of AI accountability; these extend the scope of our work beyond the design stage, to also include system implementation. Furthermore, we present the RAInS ontology which has been extended to satisfy these requirements. We evaluate our approach against three popular baseline frameworks, namely, Datasheets, Model Cards, and FactSheets, by comparing the range of information that can be captured by our KGs against these three frameworks. We demonstrate that our approach subsumes and extends the capabilities of the baseline frameworks and discuss how KGs can be used to integrate and enhance accountability information collection processes.
Original language: English
Pages (from-to): 74383-74411
Number of pages: 31
Journal: IEEE Access
Volume: 10
Early online date: 6 Jul 2022
DOIs
Publication status: Published - 20 Jul 2022

Bibliographical note

Funding information: This work was supported by an award made by the UKRI Digital Economy programme to the RAInS project (ref: EP/R033846/1 and EP/R03379X/1).

Keywords

  • Accountability
  • AI Systems
  • Machine Learning
  • Ontology
  • Provenance
