Systems of autonomous and self-interested agents interacting to achieve individual and collective goals may exhibit undesirable or unexpected behaviour if left unconstrained. Norms have been widely proposed as a means of defining and enforcing societal constraints, using the deontic concepts of obligations, permissions and prohibitions to describe what must, may and should not be done, respectively. However, recent efforts to provide norm-enabled agent architectures that guide plan choices suffer from interfering with an agent's reasoning process, and thus limit the agent's autonomy more than the norms alone require. In this paper we describe an extension of the Beliefs-Desires-Intentions (BDI) architecture that enables normative reasoning, helping agents choose and customise plans while taking norms into account. The paper makes three significant contributions: we provide a formal framework to represent norms compactly and to manage them; we present a formal characterisation of the normative positions that norms induce on an agent's execution within a given time period; and finally, we put forth a mechanism for selecting and ranking plans with respect to a set of normative restrictions.
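To make the deontic concepts concrete, the following is a minimal sketch (not the paper's formalism) of how obligations and prohibitions over actions could drive plan ranking: plans that incur fewer norm violations are preferred. All names here (`Norm`, `Deontic`, `rank_plans`) and the flat action-list plan representation are illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Deontic(Enum):
    OBLIGATION = "obligation"    # the action must be done
    PERMISSION = "permission"    # the action may be done
    PROHIBITION = "prohibition"  # the action must not be done

@dataclass(frozen=True)
class Norm:
    modality: Deontic
    action: str  # name of the action the norm regulates

def violations(plan, norms):
    """Count the violations a plan would incur: performing a
    prohibited action, or omitting an obliged one."""
    steps = set(plan)
    count = 0
    for n in norms:
        if n.modality is Deontic.PROHIBITION and n.action in steps:
            count += 1
        elif n.modality is Deontic.OBLIGATION and n.action not in steps:
            count += 1
    return count

def rank_plans(plans, norms):
    """Order candidate plans by ascending violation count,
    so the most norm-compliant plan comes first."""
    return sorted(plans, key=lambda p: violations(p, norms))
```

A plan here is just a list of action names; the paper's mechanism additionally customises plans and reasons about the time period over which norms hold, which this sketch omits.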
| Number of pages | 20 |
| Journal | Engineering Applications of Artificial Intelligence |
| Early online date | 22 May 2015 |
| Publication status | Published - Aug 2015 |
Keywords:
- multi-agent systems