


Scaling Law for Recommendation Models: Towards General-purpose User Representations

Posted: 2023-03-15 23:29:05

A recent trend shows that general-purpose models such as BERT, GPT-3, and CLIP, trained on broad data at scale, exhibit a wide range of capabilities with a single learning architecture. In this work, we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scale. We show that the scaling law holds in the user modeling domain, where the training error scales as a power law with the amount of compute. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectations of what is possible in various downstream tasks. CLUE also shows strong transferability to other domains and systems: an online experiment demonstrates significant improvements in online click-through rate (CTR). Furthermore, we investigate how performance changes with the scale-up factors, i.e., model capacity, sequence length, and batch size.
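The abstract states that training error scales as a power law with compute, but gives no fitted form or constants. As a hedged sketch, one common way such a compute scaling law is parameterized is shown below; E denotes training error, C denotes training compute, and C_0 and α are illustrative symbols, not values reported in the paper.

```latex
% Hedged sketch: one common parameterization of a compute scaling law.
% E = training error, C = training compute; C_0 and \alpha are illustrative
% constants, not values from the paper.
E(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha > 0
```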

Original title: Scaling Law for Recommendation Models: Towards General-purpose User Representations

Original abstract: A recent trend shows that a general class of models, e.g., BERT, GPT-3, CLIP, trained on broad data at scale have shown a great variety of functionalities with a single learning architecture. In this work, we explore the possibility of general-purpose user representation learning by training a universal user encoder at large scales. We demonstrate that the scaling law holds in the user modeling areas, where the training error scales as a power-law with the amount of compute. Our Contrastive Learning User Encoder (CLUE) optimizes task-agnostic objectives, and the resulting user embeddings stretch our expectation of what is possible to do in various downstream tasks. CLUE also shows great transferability to other domains and systems, as performances on an online experiment show significant improvements in online Click-Through-Rate (CTR). Furthermore, we also investigate how the performance changes according to the scale-up factors, i.e., model capacity, sequence length and batch size.
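The abstract says CLUE optimizes task-agnostic contrastive objectives but does not specify the loss. As a rough illustration of what a contrastive objective over user embeddings can look like, here is a minimal InfoNCE-style sketch in PyTorch; the function name, tensor shapes, and temperature are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumption): an InfoNCE-style contrastive loss over user
# embeddings. The paper's actual objective, encoder, and inputs are not given
# in this abstract; all names and shapes below are illustrative only.
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of two views of the same users."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors standing in for encoder outputs of two views.
batch, dim = 8, 64
loss = info_nce_loss(torch.randn(batch, dim), torch.randn(batch, dim))
```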