TY - GEN
T1 - Load Disaggregation Based on Sequence-to-point Network with Unsupervised Pre-training
AU - Chen, Shuyi
AU - Zhao, Bochao
AU - Luan, Wenpeng
AU - Zhong, Mingjun
N1 - Funding Information:
This work is supported in part by the National Natural Science Foundation of China (Key Program), NSFC-SGCC (U2066207).
PY - 2021
Y1 - 2021
N2 - It is known that successful load disaggregation via deep learning relies on large amounts of labeled data to train the deep neural networks. However, it is hard and expensive to acquire a large amount of appliance-level power data or ON/OFF labels. To overcome this weakness, in this paper, unsupervised pre-training is applied to the state-of-the-art sequence-to-point (s2p) deep learning approach for NILM, since labeling is not required for pre-training. In the proposed method, the s2p deep neural network is first pre-trained on unlabeled aggregate power readings from other houses, and then fine-tuned on a small set of aggregate power data for the target house, labeled by individual appliance monitoring. Finally, the resulting network is tested by taking the aggregate power of the target house as input and outputting the power signal of the target load. The proposed method is validated on the UK REFIT dataset and benchmarked against s2p using two popular evaluation metrics. Experimental results show that the proposed unsupervised pre-training effectively improves the NILM performance of the deep neural network when labeled training data are scarce.
AB - It is known that successful load disaggregation via deep learning relies on large amounts of labeled data to train the deep neural networks. However, it is hard and expensive to acquire a large amount of appliance-level power data or ON/OFF labels. To overcome this weakness, in this paper, unsupervised pre-training is applied to the state-of-the-art sequence-to-point (s2p) deep learning approach for NILM, since labeling is not required for pre-training. In the proposed method, the s2p deep neural network is first pre-trained on unlabeled aggregate power readings from other houses, and then fine-tuned on a small set of aggregate power data for the target house, labeled by individual appliance monitoring. Finally, the resulting network is tested by taking the aggregate power of the target house as input and outputting the power signal of the target load. The proposed method is validated on the UK REFIT dataset and benchmarked against s2p using two popular evaluation metrics. Experimental results show that the proposed unsupervised pre-training effectively improves the NILM performance of the deep neural network when labeled training data are scarce.
KW - deep neural network
KW - non-intrusive load monitoring
KW - sequence-to-point learning
KW - unsupervised pre-training
UR - http://www.scopus.com/inward/record.url?scp=85128202440&partnerID=8YFLogxK
U2 - 10.1109/EI252483.2021.9713549
DO - 10.1109/EI252483.2021.9713549
M3 - Published conference contribution
AN - SCOPUS:85128202440
T3 - 5th IEEE Conference on Energy Internet and Energy System Integration: Energy Internet for Carbon Neutrality, EI2 2021
SP - 3224
EP - 3229
BT - 5th IEEE Conference on Energy Internet and Energy System Integration
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 5th IEEE Conference on Energy Internet and Energy System Integration, EI2 2021
Y2 - 22 October 2021 through 25 October 2021
ER -