Speaker: Hongyang Zhang
Assistant Professor
University of Waterloo, Canada
Host: Prof. Zhouchen Lin
School of Intelligence Science and Technology, Peking University
Time: 2023/9/18 13:30 - 14:30
Venue: Room 209, Teaching Building, Peking University Changping Campus
Tencent Meeting: 562 985 706
Title: The Emergence of Property Concerns
Abstract:
The widespread use of foundation models has raised concerns about the legitimacy of the data used for training. In March 2023, OpenAI was asked to demonstrate to the public that its training data and training procedures are legitimate, while the company also wishes to keep the ChatGPT weights and training data confidential. In response to this challenge, we present zkDL, an efficient zero-knowledge proof of deep learning. At the core of zkDL is zkReLU, a specialized zero-knowledge proof protocol with optimized proving time and proof size for the ReLU activation function, whose non-arithmetic nature makes it a major obstacle to verifiable training of machine learning models. To integrate zkReLU into the proof system for the entire training process, we devise a novel construction of an arithmetic circuit from neural networks. This construction reduces proving time and proof sizes by a factor of the network depth. As a result, zkDL generates complete and sound proofs in less than a second per training/inference step for a 20M-parameter neural network, while preserving the privacy of data and model parameters. The new CUDA implementation of zkDL achieves a 400X speedup on an NVIDIA A100 GPU over previous state-of-the-art implementations.
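As background on why ReLU is called "non-arithmetic" in the abstract: ReLU is not a polynomial, so circuit-based proof systems cannot express it directly with addition and multiplication gates and instead introduce auxiliary witness values checked by arithmetic constraints. The Python sketch below illustrates one generic, textbook-style encoding of this idea; it is not the zkReLU protocol presented in the talk, and the bound B, the brute-force range check, and the function names relu_witness / relu_constraints_hold are hypothetical stand-ins for the range proofs and commitments a real protocol would use.

```python
# Generic illustration (NOT zkDL/zkReLU): encode y = ReLU(x) with purely
# arithmetic constraints over bounded integers:
#     x = pos - neg,   pos * neg = 0,   0 <= pos, neg < B   =>   y = pos

B = 2 ** 8  # assumed magnitude bound on activations (hypothetical)

def relu_witness(x: int):
    """Prover side: compute ReLU and the auxiliary witnesses (pos, neg)."""
    pos = max(x, 0)
    neg = max(-x, 0)
    return pos, neg

def relu_constraints_hold(x: int, y: int, pos: int, neg: int) -> bool:
    """Verifier side: arithmetic and range checks only, no branching on x."""
    in_range = 0 <= pos < B and 0 <= neg < B  # stands in for a range proof
    return in_range and x == pos - neg and pos * neg == 0 and y == pos

# Sanity check: the constraints accept y = ReLU(x) and reject a wrong output.
for x in range(-B + 1, B):
    pos, neg = relu_witness(x)
    assert relu_constraints_hold(x, max(x, 0), pos, neg)
    assert not relu_constraints_hold(x, max(x, 0) + 1, pos, neg)
```

The talk's contribution lies in making this kind of check, together with the rest of the training computation, efficient to prove in zero knowledge at the scale of a 20M-parameter network.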
Speaker Bio:
Hongyang Zhang received his master's degree from the Department of Machine Intelligence at Peking University in 2015 and his Ph.D. from the Machine Learning Department at Carnegie Mellon University in 2019, and was a postdoctoral researcher at the Toyota Technological Institute at Chicago from 2019 to 2021. In 2021 he joined the School of Computer Science at the University of Waterloo, Canada, as an assistant professor, and he concurrently serves as a visiting professor at the Vector Institute for AI in Canada. His recent research focuses on machine learning, AI safety, and large language models. He won the global championship of the 2018 NeurIPS Adversarial Vision Challenge and of the 2021 CVPR Security AI Challenger, and received Canada's Discovery Award in 2022 as well as the Amazon Research Award and the WAIC Yunfan Award in 2023. According to the 2023 Google Scholar Metrics, one of his papers ranks 13th by citations among nearly ten thousand ICML papers published over the past five years. He has repeatedly served as an area chair or senior program committee member for conferences such as NeurIPS, ACM CCS, AISTATS, ALT, and AAAI.