TY - JOUR
T1 - Deep Q-network-based multi-criteria decision-making framework for virtual simulation environment
AU - Jang, Hyeonjun
AU - Hao, Shujia
AU - Chu, Phuong Minh
AU - Sharma, Pradip Kumar
AU - Sung, Yunsick
AU - Cho, Kyungeun
N1 - Acknowledgements
This research was supported by a grant from Defense Acquisition Program Administration and Agency for Defense Development, under contract #UE171095RD, and this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (2018R1A2B2007934).
PY - 2021/9/1
Y1 - 2021/9/1
N2 - Deep learning improves the realistic expression of virtual simulations, specifically to solve multi-criteria decision-making problems, which generally rely on high-performance artificial intelligence. This study was inspired by motivation theory and observations of natural life. Recently, motivation-based control has been actively studied for realistic expression, but it presents various problems. For instance, it is difficult to define the relations among multiple motivations and to select goals based on them. Behaviors should generally be planned to take motivations and goals into account. This paper proposes a deep Q-network (DQN)-based multi-criteria decision-making framework that enables virtual agents in virtual simulation environments to automatically select goals based on motivations in real time and to plan relevant behaviors to achieve those goals. All motivations are classified according to the five levels of Maslow’s hierarchy of needs; the virtual agents train a double DQN on big social data, select optimal goals depending on motivations, and perform behaviors using predefined hierarchical task networks (HTNs). Compared to the state-of-the-art method, the proposed framework reduced the average loss from 0.1239 to 0.0491 and increased accuracy from 63.24% to 80.15%. For behavioral performance using predefined HTNs, the number of methods increased from 35 in the Q-network to 1511 in the proposed framework, and the computation time for 10,000 behavior plans was reduced from 0.118 s to 0.1079 s.
AB - Deep learning improves the realistic expression of virtual simulations, specifically to solve multi-criteria decision-making problems, which generally rely on high-performance artificial intelligence. This study was inspired by motivation theory and observations of natural life. Recently, motivation-based control has been actively studied for realistic expression, but it presents various problems. For instance, it is difficult to define the relations among multiple motivations and to select goals based on them. Behaviors should generally be planned to take motivations and goals into account. This paper proposes a deep Q-network (DQN)-based multi-criteria decision-making framework that enables virtual agents in virtual simulation environments to automatically select goals based on motivations in real time and to plan relevant behaviors to achieve those goals. All motivations are classified according to the five levels of Maslow’s hierarchy of needs; the virtual agents train a double DQN on big social data, select optimal goals depending on motivations, and perform behaviors using predefined hierarchical task networks (HTNs). Compared to the state-of-the-art method, the proposed framework reduced the average loss from 0.1239 to 0.0491 and increased accuracy from 63.24% to 80.15%. For behavioral performance using predefined HTNs, the number of methods increased from 35 in the Q-network to 1511 in the proposed framework, and the computation time for 10,000 behavior plans was reduced from 0.118 s to 0.1079 s.
KW - Behavior planning
KW - Big data
KW - Deep learning
KW - Motivation system
KW - Nature-inspired algorithm
UR - http://www.scopus.com/inward/record.url?scp=85084088122&partnerID=8YFLogxK
U2 - 10.1007/s00521-020-04918-3
DO - 10.1007/s00521-020-04918-3
M3 - Article
AN - SCOPUS:85084088122
VL - 33
SP - 10657
EP - 10671
JO - Neural Computing and Applications
JF - Neural Computing and Applications
SN - 0941-0643
ER -