Deep Reinforcement Learning for Real-Time Assembly Planning in Robot-Based Prefabricated Construction

Aiyu Zhu, Tianhong Dai, Gangyan Xu, Pieter Pauwels, Bauke de Vries, Meng Fang

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

The adoption of robotics promises to improve the efficiency, quality, and safety of prefabricated construction. Besides technologies that improve the capability of a single robot, automated assembly planning for robots at construction sites is vital for further improving efficiency and promoting robots into practice. However, given the highly dynamic and uncertain nature of the construction environment, and the varied scenarios across construction sites, making appropriate and up-to-date assembly plans remains challenging. Therefore, this paper proposes a Deep Reinforcement Learning (DRL) based method for automated assembly planning in robot-based prefabricated construction. Specifically, a re-configurable simulator for assembly planning is developed based on a Building Information Model (BIM) and an open game engine, which can support the training and testing of various optimization methods. Furthermore, the assembly planning problem is modelled as a Markov Decision Process (MDP), and a set of DRL algorithms are developed and trained using the simulator. Finally, experimental case studies in four typical scenarios are conducted and the performance of the proposed methods is verified; these results can also serve as benchmarks for future research within the community of automated construction.

Note to Practitioners — This paper is based on a comprehensive analysis of real-life assembly planning processes in prefabricated construction, and the proposed methods could bring many benefits to practitioners. Firstly, the proposed simulator can be easily re-configured to simulate diverse scenarios, which can be used to evaluate and verify operations optimization methods and new construction technologies. Secondly, the proposed DRL-based optimization methods can be directly adopted in various robot-based construction scenarios, and can also be tailored to support assembly planning in traditional human-based or human-robot construction environments. Thirdly, the proposed DRL methods and their performance in the four typical scenarios can serve as benchmarks for new construction technologies and optimization methods in assembly planning.
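To illustrate the MDP formulation the abstract refers to, the sketch below models assembly planning as a toy MDP: the state is the set of installed components, an action picks the next prefabricated component, and precedence constraints shape the reward. All names here (`AssemblyPlanningMDP`, the precedence encoding, the reward values) are hypothetical illustrations, not the paper's simulator or algorithms.

```python
import random

class AssemblyPlanningMDP:
    """Toy MDP sketch of assembly planning (hypothetical, not the paper's simulator).

    State: frozen set of components already installed.
    Action: index of the next prefabricated component to install.
    Reward: +1 for a valid installation (precedence respected), -1 otherwise.
    The episode ends once every component is placed.
    """

    def __init__(self, precedence):
        # precedence[c] = set of components that must be installed before c
        self.precedence = precedence
        self.n = len(precedence)
        self.reset()

    def reset(self):
        self.installed = set()
        return frozenset(self.installed)

    def step(self, action):
        if action in self.installed or not self.precedence[action] <= self.installed:
            # Invalid move: component already placed or precedence violated.
            reward = -1.0
        else:
            self.installed.add(action)
            reward = 1.0
        done = len(self.installed) == self.n
        return frozenset(self.installed), reward, done

# Random-policy rollout standing in for a trained DRL policy:
# component 0 must precede 1 and 2, which must both precede 3.
env = AssemblyPlanningMDP({0: set(), 1: {0}, 2: {0}, 3: {1, 2}})
state, total, done = env.reset(), 0.0, False
while not done:
    state, reward, done = env.step(random.randrange(env.n))
    total += reward
```

A DRL agent, as described in the abstract, would replace the random action choice with a learned policy that maximizes the cumulative reward, i.e. learns to respect the assembly precedence while minimizing invalid moves.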
Original language: English
Pages (from-to): 1515-1526
Number of pages: 12
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 20
Issue number: 3
Early online date: 18 Jan 2023
DOIs
Publication status: Published - Jul 2023

Bibliographical note

Funding Information:
This article was recommended for publication by Editor X. Xie upon evaluation of the reviewers' comments. This work was supported in part by the China Scholarship Council under Grant 202007720036 and in part by the National Natural Science Foundation of China under Grant 72174042. An earlier version of this paper was presented in part at the 2021 IEEE International Conference on Automation Science and Engineering [DOI: 10.1109/CASE49439.2021.9551402]. (Aiyu Zhu, Tianhong Dai, and Gangyan Xu contributed equally to this work.)

Keywords

  • assembly planning
  • robots
  • prefabricated construction
  • task analysis
  • real-time systems
  • safety
  • decision making
  • building information modelling (BIM)
