Privacy-Preserving Serverless Edge Learning with Decentralized Small Data

Posted: 2023-03-20 15:40:15

Over the last decade, data-driven algorithms have outperformed traditional optimization-based algorithms in many research areas, such as computer vision and natural language processing. However, extensive data usage brings a new challenge, or even a threat, to deep learning algorithms: privacy preservation. Distributed training strategies have recently emerged as a promising way to ensure data privacy when training deep models. This paper extends conventional serverless platforms with a serverless edge learning architecture and provides an efficient distributed training framework from a networking perspective. The framework dynamically orchestrates available resources among heterogeneous physical units to fulfill deep learning objectives efficiently. The design jointly considers learning task requests and the heterogeneity of the underlying infrastructure, including last-mile transmissions, the computation abilities of mobile devices, edge and cloud computing centers, and device battery status. Furthermore, to significantly reduce distributed training overhead, small-scale data training is proposed, integrating a general, simple data classifier. This low-overhead enhancement works seamlessly with various distributed deep models to improve communication and computation efficiency during the training phase. Finally, open challenges and future research directions are discussed to encourage the research community to develop efficient distributed deep learning techniques.
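The orchestration idea described above can be made concrete with a small sketch. The following Python snippet is a minimal illustration under our own assumptions, not the paper's actual scheduler: it invents a weighted scoring rule over each unit's last-mile bandwidth, computation ability, and battery level, and assigns a training round to the best-scoring units. All names, weights, and normalizations here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """A heterogeneous physical unit (mobile device, edge node, or cloud center)."""
    name: str
    bandwidth_mbps: float   # last-mile transmission capacity
    compute_gflops: float   # available computation ability
    battery: float          # 0.0-1.0; edge/cloud nodes can report 1.0

def score(u: Unit, w_bw: float = 0.4, w_cpu: float = 0.4, w_bat: float = 0.2) -> float:
    # Hypothetical weighted score; in the paper's framework such weights
    # would be driven by the learning task request and infrastructure state.
    return (w_bw * u.bandwidth_mbps / 100.0
            + w_cpu * u.compute_gflops / 1000.0
            + w_bat * u.battery)

def orchestrate(units: list[Unit], k: int) -> list[Unit]:
    """Pick the k best-scoring units for one round of distributed training."""
    return sorted(units, key=score, reverse=True)[:k]

units = [
    Unit("phone-1", bandwidth_mbps=20, compute_gflops=50, battery=0.35),
    Unit("edge-1",  bandwidth_mbps=80, compute_gflops=400, battery=1.0),
    Unit("cloud-1", bandwidth_mbps=60, compute_gflops=900, battery=1.0),
]
print([u.name for u in orchestrate(units, k=2)])  # e.g. ['cloud-1', 'edge-1']
```

A real orchestrator would also react to changing battery and link conditions between rounds; this static top-k selection only shows the shape of the decision.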

Original title: Privacy-Preserving Serverless Edge Learning with Decentralized Small Data

Original abstract: In the last decade, data-driven algorithms outperformed traditional optimization-based algorithms in many research areas, such as computer vision, natural language processing, etc. However, extensive data usages bring a new challenge or even threat to deep learning algorithms, i.e., privacy-preserving. Distributed training strategies have recently become a promising approach to ensure data privacy when training deep models. This paper extends conventional serverless platforms with serverless edge learning architectures and provides an efficient distributed training framework from the networking perspective. This framework dynamically orchestrates available resources among heterogeneous physical units to efficiently fulfill deep learning objectives. The design jointly considers learning task requests and underlying infrastructure heterogeneity, including last-mile transmissions, computation abilities of mobile devices, edge and cloud computing centers, and devices battery status. Furthermore, to significantly reduce distributed training overheads, small-scale data training is proposed by integrating with a general, simple data classifier. This low-load enhancement can seamlessly work with various distributed deep models to improve communications and computation efficiencies during the training phase. Finally, open challenges and future research directions encourage the research community to develop efficient distributed deep learning techniques.
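To give a rough sense of the small-scale data training idea mentioned in the abstract, the sketch below uses a simple, general classifier to keep only the samples it is least confident about, so that the expensive distributed deep model trains on a much smaller subset. The choice of logistic regression, the confidence-based selection rule, and the keep ratio are all our assumptions; the paper does not prescribe this particular classifier or criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_small_data(X, y, keep_ratio=0.2):
    """Keep the least-confident samples according to a simple classifier.

    Hypothetical instantiation of 'small-scale data training': a lightweight
    classifier filters the local dataset so the deep model trains on fewer,
    more informative samples, cutting communication and computation costs.
    """
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    conf = clf.predict_proba(X).max(axis=1)   # per-sample confidence
    n_keep = max(1, int(keep_ratio * len(X)))
    idx = np.argsort(conf)[:n_keep]           # lowest-confidence first
    return X[idx], y[idx]

# Toy usage: 1,000 synthetic samples reduced to 200 before deep training.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)
X_small, y_small = select_small_data(X, y, keep_ratio=0.2)
print(X_small.shape)  # (200, 16)
```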