Domain-Agnostic Clustering with Self-Distillation

2023-03-20 14:50:46

Recent advances in self-supervised learning have narrowed the gap between supervised and unsupervised representation learning. However, most self-supervised and deep clustering techniques rely heavily on data augmentation, which makes them ineffective for the many learning tasks where there is not enough domain knowledge to perform augmentation. We propose a new self-distillation-based algorithm for domain-agnostic clustering. Our method builds on existing deep clustering frameworks and requires no separate student model. The proposed method outperforms existing domain-agnostic (augmentation-free) algorithms on CIFAR-10. We empirically demonstrate that knowledge distillation can improve unsupervised representation learning by extracting richer 'dark knowledge' from the model than predicted labels alone provide. Preliminary experiments also suggest that self-distillation improves the convergence of DeepCluster-v2.

Original title: Domain-Agnostic Clustering with Self-Distillation

Original abstract: Recent advancements in self-supervised learning have reduced the gap between supervised and unsupervised representation learning. However, most self-supervised and deep clustering techniques rely heavily on data augmentation, rendering them ineffective for many learning tasks where insufficient domain knowledge exists for performing augmentation. We propose a new self-distillation based algorithm for domain-agnostic clustering. Our method builds upon the existing deep clustering frameworks and requires no separate student model. The proposed method outperforms existing domain agnostic (augmentation-free) algorithms on CIFAR-10. We empirically demonstrate that knowledge distillation can improve unsupervised representation learning by extracting richer 'dark knowledge' from the model than using predicted labels alone. Preliminary experiments also suggest that self-distillation improves the convergence of DeepCluster-v2.
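
The abstract's central claim is that a model's full softened output distribution carries richer 'dark knowledge' than its argmax pseudo-labels alone, and that a deep clustering model can distill this knowledge from itself, with no separate student network. The sketch below illustrates that general idea in PyTorch rather than the paper's exact objective: a hard cross-entropy loss on cluster pseudo-labels (as in DeepCluster-style training, where assignments come from k-means over the features) is blended with a temperature-softened KL term against the same model's earlier, detached predictions. The function name, the temperature T, the weight alpha, and the choice of 'teacher' signal (e.g. an EMA copy or previous-epoch outputs) are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(logits, teacher_logits, pseudo_labels,
                           T: float = 4.0, alpha: float = 0.5):
    """Illustrative blend of a hard pseudo-label loss with a soft
    self-distillation term (a sketch, not the paper's formulation).

    logits         -- current model outputs for a batch, shape (B, K)
    teacher_logits -- outputs of the *same* model from an earlier state
                      (e.g. an EMA copy or the previous epoch), so no
                      separate student network is required
    pseudo_labels  -- hard cluster assignments, e.g. from k-means over
                      the features, as in DeepCluster-style training
    T              -- softmax temperature; T > 1 exposes the relative
                      similarities between clusters ('dark knowledge')
    alpha          -- weight on the distillation term (assumed value)
    """
    # Hard loss: train against the clustering pseudo-labels alone.
    hard_loss = F.cross_entropy(logits, pseudo_labels)
    # Soft loss: match the temperature-softened teacher distribution.
    soft_loss = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 rescaling keeps gradients comparable
    return (1.0 - alpha) * hard_loss + alpha * soft_loss
```

Because the teacher signal comes from the model itself, the distillation term costs only one extra frozen forward pass per batch, and the softened distribution preserves how confidently each sample sits between clusters, information that hard pseudo-labels discard.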