The goal of the AAAI-20 Workshop on Privacy-Preserving Artificial Intelligence is to provide a platform for researchers to discuss problems and present solutions related to privacy issues arising within AI applications. The workshop will focus on both theoretical and practical challenges arising in the design of privacy-preserving AI systems and algorithms. It will place particular emphasis on algorithmic approaches to protecting data privacy in the context of learning, optimization, and decision making, settings that raise fundamental challenges for existing privacy technologies. Additionally, it welcomes algorithms and frameworks for releasing privacy-preserving benchmarks and datasets.
Topics of Interest
We invite paper submissions on the following (and related) topics:
- Applications of privacy-preserving AI systems
- Architectures and privacy-preserving learning protocols
- Constraint-based approaches to privacy
- Differential privacy: theory and applications
- Distributed privacy-preserving algorithms
- Human-aware private algorithms
- Incentive mechanisms and game theory
- Privacy-preserving machine learning
- Privacy-preserving algorithms for medical applications
- Privacy-preserving algorithms for temporal data
- Privacy-preserving test cases and benchmarks
- Privacy and policy-making
- Secure multi-party computation
- Secret sharing techniques
- Trade-offs between privacy and utility
Submission Information
Submission URL: https://easychair.org/conferences/?conf=ppai20
Submission Types
- Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
- Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.
All papers must be submitted in PDF format, using the AAAI-20 author kit.
Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAAI 2020 technical program are welcome.
Papers will be selected for oral and/or poster presentation at the workshop.
For questions about the submission process, contact the workshop co-chairs.
Important Dates
- November 15, 2019 (AOE) - Submission Deadline
- December 4, 2019 - Acceptance Notification
- February 7, 2020 - Workshop Date (Full day)
Accepted Papers
Accepted Oral Presentations
- Gilie Gefen, Omer Ben-Porat, Moshe Tennenholtz and Elad Yom-Tov. Assessing the Value of Internet Data for Medical Applications
- Kai Wen Wang, Travis Dick and Maria-Florina Balcan. Scalable and provably accurate algorithms for differentially private distributed decision tree learning
- Reza Shokri, Martin Strobel and Yair Zick. Exploiting Transparency Measures for Membership Inference: a Cautionary Tale
- Shubhankar Mohapatra, Xi He, Gautam Kamath and Om Thakkar. Diffindo! Differentially Private Learning with Noisy Labels [Removed by authors' request]
- Chaitali Ashok Choudhary, Martine De Cock, Rafael Dowsley, Anderson Nascimento and Davis Railsback. Secure Training of Extra Trees Classifiers over Continuous Data
- Dominik Fay, Jens Sjölund and Tobias J. Oechtering. Private Learning for High-Dimensional Targets with PATE
Accepted Poster Presentations
- Qiu Yuchen, Yuanyuan Qiao, Aimin Zhang and Jie Yang. Residence and Workplace Recovery: User Privacy Risk in Mobility Data
- Hanten Chang and Hiroyasu Ando. Privacy Preserving Data Sharing by Integrating Perturbed Distance Matrices
- Shreya Sharma, Xing Chaoping and Yang Liu. Privacy-Preserving Deep Learning with SPDZ
- Liyue Fan. A Survey of Differentially Private Generative Adversarial Networks
- Colin Wan, Zheng Li, Alicia Guo and Yue Zhao. SynC: A Unified Framework for Generating Synthetic Population with Gaussian Copula
- Ashish Dandekar, Debabrota Basu and Stephane Bressan. Differential Privacy at Risk: Bridging Randomness and Privacy Budget
- Ulrich Aïvodji, Sébastien Gambs and Timon Ther. GAMIN: An Adversarial Approach to Black-Box Model Inversion
- Longfei Zheng, Chaochao Chen, Yingting Liu, Bingzhe Wu, Xibin Wu, Li Wang, Lei Wang and Jun Zhou. Industrial Scale Privacy Preserving Deep Neural Network
- Yingting Liu, Chaochao Chen, Longfei Zheng, Li Wang and Jun Zhou. Privacy Preserving PCA for Multiparty Modeling
- Clémence Mauger, Gaël Le Mahec and Gilles Dequen. Modeling and Evaluation of k-anonymization Metrics
- Aleksei Triastcyn and Boi Faltings. Bayesian Differential Privacy for Machine Learning
- Himanshu Arora. Guided PATE for Scalable Learning
- Adam Richardson, Aris Filos-Ratsikas, Ljubomir Rokvic and Boi Faltings. Privately Computing Influence in Regression Models
- Hui Hu and Chao Lan. Inference Attack and Defense Mechanisms on the Distributed Private Fair Machine Learning Framework
- Yulin Zhang and Dylan Shell. Plans that Remain Private Even in Hindsight
- Junhong Cheng, Wenyan Liu, Xiaoling Wang, Xingjian Lu, Jing Feng and Yi Li. Adaptive Distributed Differential Privacy with SGD
Technical Program
Location: Hilton New York Midtown, New York, NY, USA
Room: Clinton (on the second floor of the hotel).
- 8:45 - 9:00: Poster Setup and Opening Statement
- 9:00 - 9:45: Invited Talk: Catuscia Palamidessi
- 9:45 - 10:30: Session I (Session Chair: TBA)
  - Gilie Gefen, Omer Ben-Porat, Moshe Tennenholtz and Elad Yom-Tov. Assessing the Value of Internet Data for Medical Applications.
  - Reza Shokri, Martin Strobel and Yair Zick. Exploiting Transparency Measures for Membership Inference: a Cautionary Tale.
  - Shubhankar Mohapatra, Xi He, Gautam Kamath and Om Thakkar. Diffindo! Differentially Private Learning with Noisy Labels.
- 10:30 - 11:00: Break* and Poster Session
- 11:00 - 11:45: Invited Talk: Boi Faltings
- 11:45 - 12:30: Poster Session
- 12:30 - 13:50: Lunch (not sponsored)
- 13:50 - 14:45: Panel Discussion
- 14:45 - 15:30: Invited Talk: Aleksandar Nikolov
- 15:30 - 16:00: Break* and Poster Session
- 16:00 - 16:45: Poster Session
- 16:45 - 17:30: Session II (Session Chair: TBA)
  - Kai Wen Wang, Travis Dick and Maria-Florina Balcan. Scalable and provably accurate algorithms for differentially private distributed decision tree learning.
  - Chaitali Ashok Choudhary, Martine De Cock, Rafael Dowsley, Anderson Nascimento and Davis Railsback. Secure Training of Extra Trees Classifiers over Continuous Data.
  - Dominik Fay, Jens Sjölund and Tobias J. Oechtering. Private Learning for High-Dimensional Targets with PATE.
- 17:30: End of Workshop
*Coffee Breaks will be provided in the East Corridor Second Floor and the Concourse Floor.
Invited Speakers
Privacy-Preserving Constraint Optimization (Boi Faltings)
Artificial Intelligence can play an important role in modern society as a mediator between different parties, for example in auctions and coordination mechanisms. However, the preferences and constraints involved in such mediation are private information and must be protected from leakage, both to the mediator and to the other parties. I present different solutions to this problem based on homomorphic encryption and multiparty computation, and discuss open issues for further research.
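As a concrete illustration of the multiparty computation ingredient mentioned in the abstract, the following is a minimal Python sketch of additive secret sharing used for a secure sum. The modulus, party count, and inputs are illustrative assumptions, not the protocols presented in the talk.

```python
import secrets

# Public prime modulus (an assumption for this sketch; parties would agree
# on it in advance).
Q = 2**61 - 1

def additive_shares(value: int, n_parties: int) -> list[int]:
    """Split `value` into n shares, each uniformly random on its own,
    that sum to `value` mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Three parties hold private inputs (e.g., private costs in a coordination
# problem) and want the joint sum without revealing any single input.
inputs = [12, 7, 30]
share_matrix = [additive_shares(v, 3) for v in inputs]

# Party i receives one share of every input and publishes only their sum.
partials = [sum(row[i] for row in share_matrix) % Q for i in range(3)]
print(sum(partials) % Q)  # 49: the joint sum, no individual input exposed
```

Because each party sees only uniformly random shares of the other inputs, publishing the partial sums reveals the total and nothing else, under an honest-but-curious assumption.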
Bio:
Boi Faltings is a full professor of computer science at the École Polytechnique Fédérale de Lausanne (EPFL), where he heads the Artificial Intelligence Laboratory. He has held visiting positions at the NEC Research Institute, Stanford University, and the Hong Kong University of Science and Technology. He has co-founded six companies using AI for e-commerce and computer security and has acted as an advisor to several other companies. Prof. Faltings has published over 300 refereed papers and graduated over 35 Ph.D. students, several of whom have won national and international awards. He is a fellow of the European Coordinating Committee for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He holds a Diploma from ETH Zurich and a Ph.D. from the University of Illinois at Urbana-Champaign.
Machine Learning and Privacy: Friends or Enemies? (Catuscia Palamidessi)
A great deal of recent research has been dedicated to showing the privacy risks associated with the use of machine learning. In this talk, I will explore the opposite perspective and argue that machine learning can actually be useful for privacy. In particular, I will discuss:
(1) how machine learning can help to estimate the leakage of private information in the black-box model, and
(2) how machine learning can help to construct mechanisms for privacy protection that approximate an optimal trade-off between privacy and utility.
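As a hedged illustration of point (1), the sketch below trains a classifier to guess a secret from the observable output of a hypothetical noisy channel; the gap between the attacker's accuracy and the best prior guess serves as an empirical leakage estimate. The channel, model choice, and sample sizes are assumptions for illustration, not the estimators from the talk.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical black-box channel: a binary secret seen through Laplace noise.
secret = rng.integers(0, 2, 5000)
observable = secret + rng.laplace(scale=1.0, size=5000)

X, y = observable.reshape(-1, 1), secret
X_train, y_train, X_test, y_test = X[:4000], y[:4000], X[4000:], y[4000:]

# Train an "attacker" model to guess the secret from the observable alone.
attacker = KNeighborsClassifier(n_neighbors=25).fit(X_train, y_train)
attack_accuracy = attacker.score(X_test, y_test)

# Prior baseline: always guess the more likely secret value.
baseline = max(y_test.mean(), 1 - y_test.mean())
print(f"attack accuracy {attack_accuracy:.3f} vs prior baseline {baseline:.3f}")
```

The ratio of attack accuracy to the prior baseline can be read as an empirical estimate of the channel's multiplicative Bayes leakage.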
Bio:
Catuscia Palamidessi is Director of Research at INRIA Saclay (since 2002), where she leads the COMETE team. She received her PhD from the University of Pisa in 1988 and held Full Professor positions at the University of Genova, Italy (1994-1997) and at the Pennsylvania State University, USA (1998-2002). Palamidessi's research interests include privacy, secure information flow, and concurrency. Her past achievements include proofs of expressiveness gaps between various concurrent calculi and the development of a probabilistic version of the asynchronous pi-calculus. More recently, she has contributed to establishing the foundations of probabilistic secure information flow, and she has proposed an extension of differential privacy as well as geo-indistinguishability, an approach to location privacy. In 2019 she received an ERC Advanced Grant.
The Power of Factorization Mechanisms for Answering Counting Queries (Aleksandar Nikolov)
Many tasks in private data analysis can be reduced to answering a collection of counting queries, i.e., queries that ask what fraction of the dataset satisfies a given property. Counting queries capture, for example, contingency tables and CDFs, and can be used to implement learning algorithms in the Statistical Query model. A basic method for answering counting queries with differential privacy is to add IID Laplace or Gaussian noise to the query answers. Often, however, one can obtain much better error guarantees by instead answering a different set of "strategy queries" with IID noise, and then reconstructing answers to the original queries. Optimal strategy queries can usually be computed efficiently using convex optimization. The resulting factorization mechanisms give optimal error-versus-privacy trade-offs in various models of differential privacy and parameter regimes. In this talk, I will give a flavor of factorization mechanisms and of what we can prove about them.
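To make the contrast concrete, here is a minimal NumPy sketch comparing direct IID Laplace noise on a prefix-sum workload against a factorization-style mechanism that answers strategy queries and reconstructs. The workload, the toy identity strategy, and the data are illustrative assumptions; a real mechanism would optimize the strategy via convex optimization, as the abstract notes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 64, 1.0
x = rng.integers(0, 100, n).astype(float)   # hypothetical private histogram
W = np.tril(np.ones((n, n)))                # workload: all prefix-sum (CDF) queries

def laplace_answers(Q, data):
    """Answer Q @ data with Laplace noise calibrated to Q's L1 sensitivity
    (the largest column L1 norm, i.e., one record's maximal influence)."""
    sens = np.abs(Q).sum(axis=0).max()
    return Q @ data + rng.laplace(scale=sens / eps, size=Q.shape[0])

# Baseline: IID noise added directly to the workload answers (sensitivity n).
direct = laplace_answers(W, x)

# Factorization mechanism with a toy strategy A (here the identity: answer
# the histogram itself, sensitivity 1, then reconstruct with W @ pinv(A)).
A = np.eye(n)
factored = W @ np.linalg.pinv(A) @ laplace_answers(A, x)

true = W @ x
print(np.abs(direct - true).mean(), np.abs(factored - true).mean())
```

On this workload the reconstructed answers already have noticeably lower average error than the direct ones, since the identity strategy has sensitivity 1 while the prefix-sum workload has sensitivity n; optimized strategies improve on this further.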
Bio:
Aleksandar (Sasho) Nikolov is an assistant professor at the University of Toronto. Sasho received his PhD from Rutgers University, where his supervisor was S. Muthukrishnan, and did a postdoc with the Theory Group at Microsoft Research in Redmond. He is a Canada Research Chair in Algorithms and Privacy, and from 2012-14 was a Simons Graduate Fellow in Computer Science. Sasho is broadly interested in theoretical computer science and algorithms, and specifically in differential privacy, discrepancy theory, convex geometry and geometric algorithms.
Panel Discussion
Grand Challenges in Privacy in 2020: What are they and what are we missing?
Panelists:
- Boi Faltings (EPFL)
- Antonis Papadimitriou (Duality Technologies)
- Sébastien Gambs (Université du Québec à Montréal)
- Helen Toner (Georgetown's Center for Security and Emerging Technology)
Program Committee
Contact
Workshop Chairs:
- Ferdinando Fioretto - Syracuse University
- Pascal Van Hentenryck - Georgia Institute of Technology
- Rachel Cummings - Georgia Institute of Technology