The availability of massive amounts of data, coupled with high-performance cloud-computing platforms, has driven significant progress in artificial intelligence and, in particular, in machine learning and optimization. Indeed, much of the recent scientific and technological growth in areas such as computer vision, natural language processing, transportation, and health has been driven by large-scale data sets, which provide a strong basis for improving existing algorithms and developing new ones. However, because these data sets are collected at large scale and over long periods, archiving them raises significant privacy concerns: they often reveal sensitive personal information that can be exploited, without the knowledge or consent of the individuals involved, for purposes including monitoring, discrimination, and illegal activities.
The goal of the AAAI-20 Workshop on Privacy-Preserving Artificial Intelligence is to provide a platform for researchers to discuss problems and present solutions related to privacy issues arising in AI applications. The workshop will focus on both theoretical and practical challenges in the design of privacy-preserving AI systems and algorithms. It will place particular emphasis on algorithmic approaches to protecting data privacy in the contexts of learning, optimization, and decision making, which raise fundamental challenges for existing technologies. Additionally, it will welcome algorithms and frameworks for releasing privacy-preserving benchmarks and datasets.

Topics of Interest

We invite paper submissions on the following (and related) topics:
  • Applications of privacy-preserving AI systems
  • Architectures and privacy-preserving learning protocols
  • Constraint-based approaches to privacy
  • Differential privacy: theory and applications
  • Distributed privacy-preserving algorithms
  • Human-aware private algorithms
  • Incentive mechanisms and game theory
  • Privacy-preserving machine learning
  • Privacy-preserving algorithms for medical applications
  • Privacy-preserving algorithms for temporal data
  • Privacy-preserving test cases and benchmarks
  • Privacy and policy-making
  • Secure multi-party computation
  • Secret sharing techniques
  • Trade-offs between privacy and utility
Position, perspective, and vision papers are also welcome. Finally, the workshop welcomes papers that describe the release of privacy-preserving benchmarks and datasets that the community can use to solve fundamental problems of interest, for example in machine learning and optimization for health systems and urban networks.

Submission Information

Submission URL: https://easychair.org/conferences/?conf=ppai20

Submission Types

  • Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
  • Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.

All papers must be submitted in PDF format, using the AAAI-20 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAAI-20 technical program are welcome. Papers will be selected for oral and/or poster presentation at the workshop.

For questions about the submission process, contact the workshop co-chairs.

Important Dates

  • November 15, 2019 (AOE) - Submission Deadline
  • December 4, 2019 - Acceptance Notification
  • February 7, 2020 - Workshop Date (Full day)

Accepted Papers

Accepted Oral Presentations

Accepted Poster Presentations

Technical Program

Location: Hilton New York Midtown, New York, NY, USA
Room: Clinton (on the second floor of the hotel).

*Coffee breaks will be provided in the East Corridor (second floor) and on the Concourse Floor.

Invited Speakers

  • Boi Faltings (EPFL)

    Privacy-Preserving Constraint Optimization
    Artificial intelligence can play an important role in modern society as a mediator between different parties, for example in auctions and coordination mechanisms. However, the preferences and constraints involved in such mediation are private information and must be protected from leaking both to the mediator and to the other parties. I present different solutions to this problem based on homomorphic encryption and multiparty computation, and discuss open issues for further research.

    Bio:  Boi Faltings is a full professor of computer science at the École Polytechnique Fédérale de Lausanne (EPFL), where he heads the Artificial Intelligence Laboratory. He has held visiting positions at NEC Research Institute, Stanford University, and the Hong Kong University of Science and Technology. He has co-founded six companies using AI for e-commerce and computer security and has acted as advisor to several other companies. Prof. Faltings has published over 300 refereed papers and graduated over 35 Ph.D. students, several of whom have won national and international awards. He is a fellow of the European Coordinating Committee for Artificial Intelligence and of the Association for the Advancement of Artificial Intelligence (AAAI). He holds a Diploma from ETH Zurich and a Ph.D. from the University of Illinois at Urbana-Champaign.
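
    As a toy illustration of the multiparty-computation approach mentioned in the abstract above (an editorial sketch, not material from the talk), the Python snippet below uses additive secret sharing so that agents can reveal only the sum of their private costs, e.g. to a mediator, without exposing any individual cost. The scenario, modulus, and variable names are illustrative assumptions.

        import secrets

        P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

        def share(value, n_parties):
            """Split `value` into n_parties additive shares that sum to it
            mod P. Any n_parties - 1 shares reveal nothing about `value`."""
            shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        # Hypothetical scenario: three agents hold private costs for a joint
        # plan and want only the total to be revealed.
        private_costs = [17, 42, 8]
        n = len(private_costs)

        # Each agent shares its cost; party j ends up holding one share of
        # each agent's cost.
        all_shares = [share(c, n) for c in private_costs]

        # Because the scheme is linear, each party can locally add the shares
        # it holds to obtain a share of the total.
        sum_shares = [sum(all_shares[i][j] for i in range(n)) % P
                      for j in range(n)]

        print(reconstruct(sum_shares))  # 67: the total; no individual cost exposed

    The same linearity underlies privacy-preserving constraint optimization protocols, where aggregate quantities such as total cost are computed over secret-shared inputs.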

  • Catuscia Palamidessi (INRIA)

    Machine learning and privacy: friends or enemies?
    Recently, a great deal of research effort has been devoted to showing the privacy risks associated with the use of machine learning. In this talk, I will explore the opposite perspective and argue that machine learning can actually be useful for privacy. In particular, I will discuss
    (1) how machine learning can help estimate the leakage of private information in the black-box model, and
    (2) how machine learning can help construct mechanisms for privacy protection that approximate an optimal trade-off between privacy and utility.

    Bio:  Catuscia Palamidessi has been Director of Research at INRIA Saclay since 2002, where she leads the COMETE team. She received her PhD from the University of Pisa in 1988 and held full professor positions at the University of Genova, Italy (1994-1997) and at the Pennsylvania State University, USA (1998-2002). Palamidessi's research interests include privacy, secure information flow, and concurrency. Her past achievements include proofs of expressiveness gaps between various concurrent calculi and the development of a probabilistic version of the asynchronous pi-calculus. More recently, she has contributed to establishing the foundations of probabilistic secure information flow, proposed an extension of differential privacy, and introduced geo-indistinguishability, an approach to location privacy. In 2019 she received an ERC Advanced Grant.
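
    In the spirit of point (1) above, black-box leakage estimation, here is an editorial sketch (not code from the talk) of the classifier-based idea: train a classifier to guess the secret from the mechanism's outputs; its held-out accuracy estimates the posterior Bayes vulnerability, from which leakage follows. The toy mechanism, sample sizes, and k-NN choice are illustrative assumptions.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)

        # Hypothetical black-box mechanism: releases a secret bit plus
        # Laplace noise of scale b.
        def mechanism(secret, b=1.0):
            return secret + rng.laplace(scale=b)

        # Sample (secret, observation) pairs by querying the black box.
        N = 20000
        secret_bits = rng.integers(0, 2, size=N)
        obs = np.array([mechanism(s) for s in secret_bits]).reshape(-1, 1)

        # The classifier's held-out accuracy estimates the posterior Bayes
        # vulnerability: the adversary's best chance of guessing the secret.
        clf = KNeighborsClassifier(n_neighbors=25)
        clf.fit(obs[:N // 2], secret_bits[:N // 2])
        post_vuln = clf.score(obs[N // 2:], secret_bits[N // 2:])

        prior_vuln = 0.5  # uniform prior over the two secrets
        print(f"multiplicative Bayes leakage ~ {post_vuln / prior_vuln:.3f}")
        print(f"min-entropy leakage ~ {np.log2(post_vuln / prior_vuln):.3f} bits")

    The appeal of this estimator is that it needs only query access to the mechanism: no model of its internals is required.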

  • Aleksandar Nikolov (University of Toronto)

    The Power of Factorization Mechanisms for Answering Counting Queries
    Many tasks in private data analysis can be reduced to answering a collection of counting queries, i.e., queries that ask what fraction of the dataset satisfies a given property. Counting queries capture, for example, contingency tables and CDFs, and can be used to implement learning algorithms in the Statistical Query model. A basic method to answer counting queries with differential privacy is to add IID Laplace or Gaussian noise to the query answers. Often, however, one can get much better error guarantees by instead answering a different set of “strategy queries” with IID noise, and then reconstructing answers to the original queries. Optimal strategy queries can usually be computed efficiently using convex optimization. The resulting factorization mechanisms give optimal error vs. privacy trade-offs in various models of differential privacy and parameter regimes. In this talk, I will give a flavor of factorization mechanisms and what we can prove about them.

    Bio:  Aleksandar (Sasho) Nikolov is an assistant professor at the University of Toronto. Sasho received his PhD from Rutgers University, where his supervisor was S. Muthukrishnan, and did a postdoc with the Theory Group at Microsoft Research in Redmond. He is a Canada Research Chair in Algorithms and Privacy, and from 2012-14 was a Simons Graduate Fellow in Computer Science. Sasho is broadly interested in theoretical computer science and algorithms, and specifically in differential privacy, discrepancy theory, convex geometry and geometric algorithms.
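
    To make the idea of strategy queries concrete, here is an editorial sketch under simplifying assumptions (not code from the talk). It answers all prefix-sum (CDF) queries over a small histogram two ways: directly with IID Gaussian noise, and via the identity strategy, i.e. releasing noisy histogram cells and reconstructing the prefix sums from them. The noise multiplier is held fixed so both runs correspond to the same Gaussian-mechanism privacy level.

        import numpy as np

        rng = np.random.default_rng(0)

        # Workload: all n prefix-sum (CDF) queries over a length-n histogram.
        n = 64
        W = np.tril(np.ones((n, n)))  # W[i, j] = 1 iff j <= i

        def factorization_mechanism(x, A, sigma):
            """Answer W x via strategy A: release A x plus Gaussian noise
            calibrated to A's L2 sensitivity (max column norm), then
            reconstruct the workload answers with W A^+."""
            sens = np.linalg.norm(A, axis=0).max()
            y = A @ x + rng.normal(0.0, sigma * sens, size=A.shape[0])
            return W @ np.linalg.pinv(A) @ y

        x = rng.integers(0, 20, size=n).astype(float)  # a toy histogram
        sigma = 1.0  # fixed noise multiplier = fixed privacy level

        strategies = [("direct (A = W)", W), ("identity (A = I)", np.eye(n))]
        for name, A in strategies:
            trials = [(factorization_mechanism(x, A, sigma) - W @ x) ** 2
                      for _ in range(200)]
            print(f"{name}: mean squared error per query = {np.mean(trials):.1f}")

    Even the simple identity strategy roughly halves the per-query error here; carefully optimized strategies (e.g. hierarchical ones for prefix sums) do better still as n grows.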


  • Panel Discussion

    Grand Challenges in Privacy in 2020: What are they and what are we missing?

    Panelists:

Program Committee

  • Aws Albarghouthi - University of Wisconsin-Madison
  • Carsten Baum - Bar Ilan University
  • Aurélien Bellet - INRIA
  • Elette Boyle - Technion
  • Mark Bun - Boston University
  • Kamalika Chaudhuri - University of California San Diego
  • Graham Cormode - The University of Warwick
  • Marco Gaboardi - Boston University
  • Antti Honkela - University of Helsinki
  • Dali Kaafar - Data61-CSIRO
  • Peter Kairouz - Google AI
  • Kim Laine - Microsoft
  • Audra McMillan - Northeastern University
  • Sebastian Meiser - University College London
  • Ilya Mironov - Google
  • Aleksandar Nikolov - University of Toronto
  • Kobbi Nissim - Georgetown University
  • Catuscia Palamidessi - INRIA
  • Reza Shokri - National University of Singapore
  • Jonathan Ullman - Northeastern University
  • Xiao Wang - Northwestern University
  • Zhiwei Steven Wu - University of Minnesota
Contact

For questions about the workshop, contact the workshop chairs.