
Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift

2023-03-20 14:50:41

With privacy as a motivation, Federated Learning (FL) is an increasingly used paradigm in which learning takes place collectively on edge devices, each with a cache of user-generated training examples that remain resident on the local device. These on-device training examples are gathered in situ during the course of users' interactions with their devices, and thus are highly reflective of at least part of the inference data distribution. Yet a distribution shift may still exist: the on-device training examples may lack some data inputs expected to be encountered at inference time. This paper proposes a way to mitigate this shift: selective usage of datacenter data, mixed in with FL. By mixing decentralized (federated) and centralized (datacenter) data, we can form an effective training data distribution that better matches the inference data distribution, resulting in more useful models while still meeting the private training data access constraints imposed by FL.
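The mixing idea can be made concrete with a toy sketch. The following is a minimal illustration, assuming a client-level mixing scheme in which the datacenter data joins a FedAvg-style round as one extra pseudo-client; the helper `local_update`, the mixing knob `dc_weight`, and the linear model are hypothetical choices made here for illustration, not the paper's actual algorithms.

```python
import numpy as np

def local_update(weights, examples, lr=0.1):
    # One pass of SGD over a cache of (x, y) pairs for a linear model
    # with squared-error loss. Stands in for whatever on-device (or
    # datacenter) training step a real deployment would run.
    w = weights.copy()
    for x, y in examples:
        grad = (w @ x - y) * x  # gradient of 0.5 * (w @ x - y)^2
        w = w - lr * grad
    return w

def mixed_fedavg_round(weights, client_caches, datacenter_batch, dc_weight=0.2):
    # One FedAvg-style round where a batch of centralized (datacenter)
    # data is treated as a single extra pseudo-client. dc_weight is a
    # hypothetical knob controlling how strongly the centralized data
    # pulls the effective training distribution toward the inference
    # distribution; 0.0 recovers plain federated averaging.
    client_models = [local_update(weights, cache) for cache in client_caches]
    fed_avg = np.mean(client_models, axis=0)            # decentralized part
    dc_model = local_update(weights, datacenter_batch)  # centralized part
    return (1.0 - dc_weight) * fed_avg + dc_weight * dc_model

if __name__ == "__main__":
    # Toy run: three devices with local caches, plus one datacenter batch
    # covering inputs the devices never see.
    rng = np.random.default_rng(0)
    w = np.zeros(4)
    caches = [[(rng.normal(size=4), 1.0) for _ in range(8)] for _ in range(3)]
    dc_batch = [(rng.normal(size=4), -1.0) for _ in range(8)]
    for _ in range(10):
        w = mixed_fedavg_round(w, caches, dc_batch, dc_weight=0.2)
    print(w)
```

Note that in this sketch the datacenter update runs server-side, so device caches never leave the devices, consistent with the private training data access constraints the abstract describes.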

Original title: Jointly Learning from Decentralized (Federated) and Centralized Data to Mitigate Distribution Shift
