TY - JOUR
T1 - Data-driven two-layer visual dictionary structure learning
AU - Yu, Xiangchun
AU - Yu, Zhezhou
AU - Wu, Lei
AU - Pang, Wei
AU - Lin, Chenghua
N1 - This work was supported by (1) the Science and Technology Developing Project of Jilin Province, China (Grant No. 20150204007GX) and (2) the Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education.
PY - 2019/3/8
Y1 - 2019/3/8
AB - An important issue in statistical modeling is to determine the complexity of a model from the scale of the data, so as to effectively mitigate overfitting when big data are unavailable. We adopt a data-driven approach that automatically determines the number of components in the model. To extract more robust features, we propose a framework for data-driven two-layer structure visual dictionary learning (DTSVDL). It divides visual dictionary structure learning into two layers: the attribute layer and the detail layer. In the attribute layer, the attributes of the image dataset are learned by a data-driven Bayesian nonparametric model. In the detail layer, the detailed information over the attributes is further explored and refined, and each attribute is weighted by the number of effective observations associated with it. Our approach has three main advantages: (1) the two-layer structure makes the resulting visual dictionary more expressive; (2) the number of components in the attribute layer is determined automatically from the data; and (3) because the number of components is matched to the scale of the visual words, the model effectively mitigates overfitting. In addition, comparisons with stacked autoencoders, stacked denoising autoencoders, LeNet-5, speeded-up robust features, and the pretrained deep learning model ImageNet-VGG-F show that our approach achieves satisfactory image categorization results on two benchmark datasets; specifically, it achieves higher categorization performance than these classical approaches on the 15 scene categories and action datasets. We conclude that DTSVDL possesses good generalization ability, derived from the attribute information, as well as excellent discriminative power, derived from the detail information. In other words, the visual dictionary learned by our algorithm is both more expressive and more discriminative.
KW - statistical modeling
KW - visual dictionary
KW - Bayesian nonparametric model
KW - deep learning
KW - overfitting
KW - hierarchical model
KW - bag of words
KW - latent Dirichlet allocation
KW - features
UR - http://www.mendeley.com/research/datadriven-twolayer-visual-dictionary-structure-learning
DO - 10.1117/1.JEI.28.2.023006
M3 - Article
VL - 28
JO - Journal of Electronic Imaging
JF - Journal of Electronic Imaging
SN - 1017-9909
IS - 2
M1 - 023006
ER -
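
The abstract above describes an attribute layer whose component count is inferred by a data-driven Bayesian nonparametric model, with each attribute weighted by its number of effective observations. The following is a minimal, hypothetical Python sketch of that idea, not the authors' implementation: it assumes a truncated Dirichlet-process Gaussian mixture (scikit-learn's BayesianGaussianMixture) as the nonparametric model and synthetic vectors standing in for real local descriptors (e.g., SURF); the activity threshold and all names are illustrative.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for local image descriptors (e.g., SURF vectors).
rng = np.random.default_rng(0)
descriptors = np.vstack([
    rng.normal(loc=center, scale=0.3, size=(200, 64))
    for center in (-2.0, 0.0, 2.0)
])

# Attribute layer: a truncated Dirichlet-process mixture whose effective
# number of components is inferred from the data rather than fixed a priori.
dpgmm = BayesianGaussianMixture(
    n_components=30,  # truncation level: an upper bound, not a fixed K
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(descriptors)

# Weight each attribute by its number of effective observations and keep
# only the active ones; these counts would feed the detail layer as weights.
responsibilities = dpgmm.predict_proba(descriptors)
effective_obs = responsibilities.sum(axis=0)  # expected observations per component
active = effective_obs > 1.0                  # illustrative activity threshold
print(f"active attributes: {active.sum()} of {dpgmm.n_components}")

On this three-cluster toy input, the fit typically concentrates its mass on about three components, illustrating how the model's complexity tracks the scale of the data rather than a preset dictionary size.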