Severity-sensitive norm-governed multi-agent planning

Luca Gasparini, Timothy J. Norman*, Martin J. Kollingbaum

*Corresponding author for this work

Research output: Contribution to journal › Article

1 Citation (Scopus)
5 Downloads (Pure)

Abstract

In making practical decisions, agents are expected to comply with ideals of behaviour, or norms. In reality, it may not be possible for an individual, or a team of agents, to be fully compliant—actual behaviour often differs from the ideal. The question we address in this paper is how we can design agents that act in such a way that they select collective strategies to avoid more critical failures (norm violations), and mitigate the effects of violations that do occur. We model the normative requirements of a system through contrary-to-duty obligations and violation severity levels, and propose a novel multi-agent planning mechanism based on Decentralised POMDPs that uses a qualitative reward function to capture levels of compliance: N-Dec-POMDPs. We develop mechanisms for solving this type of multi-agent planning problem and show, through empirical analysis, that joint policies generated are equally as good as those produced through existing methods but with significant reductions in execution time.
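The abstract's "qualitative reward function to capture levels of compliance" can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's actual N-Dec-POMDP formulation: it merely shows one way a severity-ordered comparison of joint policies might work, where avoiding a single severe violation outweighs any number of less severe ones.

```python
# Hypothetical sketch: comparing candidate joint policies by the norm
# violations they incur, ordered by severity. Function names and the
# tuple encoding are illustrative assumptions, not from the paper.

def violation_profile(violations, num_levels):
    """Count violations at each severity level (0 = most severe)."""
    counts = [0] * num_levels
    for severity in violations:
        counts[severity] += 1
    return tuple(counts)

def better_policy(profile_a, profile_b):
    """Lexicographic comparison: fewer severe violations wins."""
    return profile_a < profile_b  # Python tuple comparison is lexicographic

# Policy A violates one severe norm; policy B violates three mild norms.
a = violation_profile([0], num_levels=3)        # -> (1, 0, 0)
b = violation_profile([2, 2, 2], num_levels=3)  # -> (0, 0, 3)
assert better_policy(b, a)  # B is preferred: it avoids the severe violation
```

Under this kind of ordering, no quantity of minor violations can compensate for a more critical failure, which matches the abstract's goal of selecting collective strategies that avoid the most severe norm violations first.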

Original language: English
Pages (from-to): 26-58
Number of pages: 33
Journal: Autonomous Agents and Multi-Agent Systems
Volume: 32
Issue number: 1
Early online date: 7 Jul 2017
DOI: 10.1007/s10458-017-9372-x
Publication status: Published - Jan 2018

Fingerprint

  • Planning
  • Decision making
  • Compliance

Keywords

  • Dec-POMDPs
  • Multi-agent planning
  • Norms

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Severity-sensitive norm-governed multi-agent planning. / Gasparini, Luca; Norman, Timothy J.; Kollingbaum, Martin J.

In: Autonomous Agents and Multi-Agent Systems, Vol. 32, No. 1, 01.2018, p. 26-58.

Gasparini, Luca; Norman, Timothy J.; Kollingbaum, Martin J. / Severity-sensitive norm-governed multi-agent planning. In: Autonomous Agents and Multi-Agent Systems. 2018; Vol. 32, No. 1. pp. 26-58.
@article{e6634ae6401d408c98add2d957dbbeff,
title = "Severity-sensitive norm-governed multi-agent planning",
abstract = "In making practical decisions, agents are expected to comply with ideals of behaviour, or norms. In reality, it may not be possible for an individual, or a team of agents, to be fully compliant—actual behaviour often differs from the ideal. The question we address in this paper is how we can design agents that act in such a way that they select collective strategies to avoid more critical failures (norm violations), and mitigate the effects of violations that do occur. We model the normative requirements of a system through contrary-to-duty obligations and violation severity levels, and propose a novel multi-agent planning mechanism based on Decentralised POMDPs that uses a qualitative reward function to capture levels of compliance: N-Dec-POMDPs. We develop mechanisms for solving this type of multi-agent planning problem and show, through empirical analysis, that joint policies generated are equally as good as those produced through existing methods but with significant reductions in execution time.",
keywords = "Dec-POMDPs, Multi-agent planning, Norms",
author = "Luca Gasparini and Norman, {Timothy J.} and Kollingbaum, {Martin J.}",
note = "This research was funded by Selex ES. The software developed during this research, including the norm analysis and planning algorithms, the simulator and harbour protection scenario used during evaluation, is freely available from doi:10.5258/SOTON/D0139",
year = "2018",
month = jan,
doi = "10.1007/s10458-017-9372-x",
language = "English",
volume = "32",
pages = "26--58",
journal = "Autonomous Agents and Multi-Agent Systems",
issn = "1387-2532",
publisher = "Springer Netherlands",
number = "1",
}

TY  - JOUR
T1  - Severity-sensitive norm-governed multi-agent planning
AU  - Gasparini, Luca
AU  - Norman, Timothy J.
AU  - Kollingbaum, Martin J.
N1  - This research was funded by Selex ES. The software developed during this research, including the norm analysis and planning algorithms, the simulator and harbour protection scenario used during evaluation, is freely available from doi:10.5258/SOTON/D0139
PY  - 2018/1
Y1  - 2018/1
N2  - In making practical decisions, agents are expected to comply with ideals of behaviour, or norms. In reality, it may not be possible for an individual, or a team of agents, to be fully compliant—actual behaviour often differs from the ideal. The question we address in this paper is how we can design agents that act in such a way that they select collective strategies to avoid more critical failures (norm violations), and mitigate the effects of violations that do occur. We model the normative requirements of a system through contrary-to-duty obligations and violation severity levels, and propose a novel multi-agent planning mechanism based on Decentralised POMDPs that uses a qualitative reward function to capture levels of compliance: N-Dec-POMDPs. We develop mechanisms for solving this type of multi-agent planning problem and show, through empirical analysis, that joint policies generated are equally as good as those produced through existing methods but with significant reductions in execution time.
AB  - In making practical decisions, agents are expected to comply with ideals of behaviour, or norms. In reality, it may not be possible for an individual, or a team of agents, to be fully compliant—actual behaviour often differs from the ideal. The question we address in this paper is how we can design agents that act in such a way that they select collective strategies to avoid more critical failures (norm violations), and mitigate the effects of violations that do occur. We model the normative requirements of a system through contrary-to-duty obligations and violation severity levels, and propose a novel multi-agent planning mechanism based on Decentralised POMDPs that uses a qualitative reward function to capture levels of compliance: N-Dec-POMDPs. We develop mechanisms for solving this type of multi-agent planning problem and show, through empirical analysis, that joint policies generated are equally as good as those produced through existing methods but with significant reductions in execution time.
KW  - Dec-POMDPs
KW  - Multi-agent planning
KW  - Norms
UR  - http://www.scopus.com/inward/record.url?scp=85021914982&partnerID=8YFLogxK
U2  - 10.1007/s10458-017-9372-x
DO  - 10.1007/s10458-017-9372-x
M3  - Article
VL  - 32
SP  - 26
EP  - 58
JO  - Autonomous Agents and Multi-Agent Systems
JF  - Autonomous Agents and Multi-Agent Systems
SN  - 1387-2532
IS  - 1
ER  -