Structural Deep Clustering Network



Due to the file-size limit, the complete data can be found in

Baidu Netdisk:

graph: Link: Password: opc1

data: Link: Password: 1gd4

Google Drive:




python --name [usps|hhar|reut|acm|dblp|cite]


  • Q: Why not use distribution Q to supervise distribution P directly?
    A: The reasons are two-fold: 1) Previous work (e.g., DeepCluster) has used the clustering assignments as pseudo-labels to re-train the encoder in a supervised manner. However, in our experiments we found that the gradient of the cross-entropy loss is too aggressive and disturbs the embedding space. 2) Although we could replace the cross-entropy loss with a KL divergence, a problem remains: it carries no clustering information. The original motivation of our research on deep clustering is to integrate the clustering objective into the powerful representation ability of deep learning. Therefore, we introduce the distribution P to increase the cohesion of the clusters; details can be found in DEC.
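The Q and P distributions mentioned above follow DEC: Q is a soft assignment under a Student's t kernel, and P sharpens Q toward more confident, cluster-balanced targets. A minimal NumPy sketch (the names `soft_assignment`, `target_distribution`, and the arguments `z`, `mu` are illustrative, not the repo's API):

```python
import numpy as np

def soft_assignment(z, mu, alpha=1.0):
    # Student's t-distribution kernel (as in DEC): q[i, j] measures the
    # similarity between embedding z[i] and cluster center mu[j].
    dist2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    # P sharpens Q: squaring emphasizes high-confidence assignments, and
    # dividing by the per-cluster frequency keeps large clusters from
    # dominating the target.
    w = q ** 2 / q.sum(axis=0)
    return w / w.sum(axis=1, keepdims=True)
```

Training then minimizes KL(P || Q), so the encoder is pulled toward its own high-confidence assignments rather than hard pseudo-labels.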

  • Q: How to apply SDCN to other datasets?
    A: In general, if you want to apply our model to other datasets, three steps are required.

    1. Construct the KNN graph based on the similarity of features. Details can be found in
    2. Pretrain the autoencoder and save the pre-trained model. Details can be found in data/
    3. Replace the args in and run the code.
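Step 1 above can be sketched as follows. This is a minimal version assuming cosine similarity over a dense feature matrix; the function name `construct_knn_graph` and the choice of similarity are illustrative, and the repo's own graph-construction script may differ:

```python
import numpy as np

def construct_knn_graph(features, k=10):
    # L2-normalize rows so the dot product equals cosine similarity.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops
    edges = []
    for i in range(sim.shape[0]):
        # argsort is ascending, so the last k indices are the k nearest
        # neighbors of sample i.
        for j in np.argsort(sim[i])[-k:]:
            edges.append((i, int(j)))
    return edges
```

The resulting edge list can then be saved in whatever format the graph loader expects (e.g., one `i j` pair per line).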


If you make use of the SDCN model in your research, please cite the following in your manuscript:

@inproceedings{bo2020structural,
  author    = {Deyu Bo and
               Xiao Wang and
               Chuan Shi and
               Meiqi Zhu and
               Emiao Lu and
               Peng Cui},
  title     = {Structural Deep Clustering Network},
  booktitle = {{WWW}},
  pages     = {1400--1410},
  publisher = {{ACM} / {IW3C2}},
  year      = {2020}
}