TY - GEN
T1 - Monitoring plan optimality using landmarks and domain-independent heuristics
AU - Pereira, Ramon Fraga
AU - Oren, Nir
AU - Meneguzzi, Felipe
N1 - Publisher Copyright:
© 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
WS-17-01: Artificial Intelligence and Operations Research for Social Good;
WS-17-02: Artificial Intelligence, Ethics, and Society;
WS-17-03: Artificial Intelligence for Connected and Automated Vehicles;
WS-17-04: Artificial Intelligence for Cyber Security;
WS-17-05: Artificial Intelligence for Smart Grids and Buildings;
WS-17-06: Computer Poker and Imperfect Information Games;
WS-17-07: Crowdsourcing, Deep Learning and Artificial Intelligence Agents;
WS-17-08: Distributed Machine Learning;
WS-17-09: Joint Workshop on Health Intelligence;
WS-17-10: Human-Aware Artificial Intelligence;
WS-17-11: Human-Machine Collaborative Learning;
WS-17-12: Knowledge-Based Techniques for Problem Solving and Reasoning;
WS-17-13: Plan, Activity, and Intent Recognition;
WS-17-14: Symbolic Inference and Optimization;
WS-17-15: What's Next for AI in Games?
PY - 2017
Y1 - 2017
AB - When acting, agents may deviate from the optimal plan, either because they are not perfect optimizers or because they interleave multiple unrelated tasks. In this paper, we detect such deviations by analyzing a set of observations and a monitored goal to determine if an observed agent's actions contribute towards achieving the goal. We address this problem without pre-defined static plan libraries, and instead use a planning domain definition to represent the problem and the expected agent behavior. At the core of our approach, we exploit domain-independent heuristics for estimating the goal distance, incorporating the concept of landmarks (actions which all plans must undertake if they are to achieve the goal). We evaluate the resulting approach empirically using several known planning domains, and demonstrate that our approach effectively detects such deviations.
UR - http://www.scopus.com/inward/record.url?scp=85046106639&partnerID=8YFLogxK
M3 - Published conference contribution
AN - SCOPUS:85046106639
VL - WS-17-01 - WS-17-15
SP - 867
EP - 873
BT - AAAI Workshop - Technical Report
PB - AI Access Foundation
T2 - 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Y2 - 4 February 2017 through 5 February 2017
ER -