


CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale

2023-03-14 22:31:08

The privacy concerns of providing deep learning inference as a service have underscored the need for private inference (PI) protocols that protect users' data and the service provider's model using cryptographic methods. Recently proposed PI protocols have achieved significant reductions in PI latency by moving the computationally heavy homomorphic encryption (HE) parts to an offline/pre-compute phase. Paired with recent optimizations that tailor networks for PI, these protocols have achieved performance levels that are tantalizingly close to being practical. In this paper, we conduct a rigorous end-to-end characterization of PI protocols and optimization techniques and find that the current understanding of PI performance is overly optimistic. Specifically, we find that offline storage costs of garbled circuits (GC), a key cryptographic protocol used in PI, on user/client devices are prohibitively high and force much of the expensive offline HE computation to the online phase, resulting in a 10-1000× increase to PI latency. We propose a modified PI protocol that significantly reduces client-side storage costs for a small increase in online latency. Evaluated end-to-end, the modified protocol outperforms current protocols by reducing the mean PI latency by 4× for ResNet18 on TinyImageNet. We conclude with a discussion of several recently proposed PI optimizations in light of the findings and note many actually increase PI latency when evaluated from an end-to-end perspective.

Original title: CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale
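The central tradeoff the abstract describes — limited client-side storage for garbled-circuit (GC) material forcing offline HE precomputation onto the online critical path — can be illustrated with a toy cost model. This is a minimal sketch for intuition only; the function name, layer counts, storage sizes, and per-layer costs below are all hypothetical and are not taken from the paper.

```python
# Toy model of the offline/online tradeoff in hybrid PI protocols.
# GC material for each nonlinear layer must be stored on the client
# during the offline phase; once the client's storage budget is
# exhausted, the corresponding HE precomputation can no longer be
# hidden offline and its cost lands on the online critical path.

def online_latency(layers, storage_budget_bytes,
                   he_online_cost_s, base_online_cost_s):
    """Estimate online latency given per-layer GC storage needs.

    layers: list of per-layer GC storage requirements in bytes.
    storage_budget_bytes: client-side storage available for GC material.
    he_online_cost_s: extra HE cost per layer that spills online.
    base_online_cost_s: online cost per layer when precompute succeeds.
    """
    used = 0
    latency = 0.0
    for gc_bytes in layers:
        if used + gc_bytes <= storage_budget_bytes:
            used += gc_bytes          # GC material fits: HE stays offline
            latency += base_online_cost_s
        else:
            # Storage exhausted: HE work moves into the online phase.
            latency += base_online_cost_s + he_online_cost_s
    return latency

# Hypothetical example: 18 nonlinear layers, 100 MB of GC material each.
layers = [100 * 2**20] * 18
fits_all = online_latency(layers, 2 * 2**30,
                          he_online_cost_s=5.0, base_online_cost_s=0.1)
tight = online_latency(layers, 500 * 2**20,
                       he_online_cost_s=5.0, base_online_cost_s=0.1)
```

With a 2 GB budget all GC material fits and every layer pays only the small online cost; with a 500 MB budget most layers spill their HE work online and the modeled latency grows by more than an order of magnitude, mirroring the 10-1000× blow-up the paper reports for real protocols.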