Scope and Topics

The availability of massive amounts of data, coupled with high-performance cloud computing platforms, has driven significant progress in artificial intelligence and, in particular, machine learning and optimization. It has profoundly impacted several areas, including computer vision, natural language processing, and transportation. However, the use of rich data sets also raises significant privacy concerns: they often reveal sensitive personal information that can be exploited, without the knowledge or consent of the individuals involved, for purposes including monitoring, discrimination, and illegal activities.
The third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22), held at the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), builds on the success of the previous editions, PPAI-20 and PPAI-21, to provide a platform for researchers, AI practitioners, and policymakers to discuss technical and societal issues and present solutions related to privacy in AI applications. The workshop will focus on both the theoretical and practical challenges in designing privacy-preserving AI systems and algorithms, and will have a strong multidisciplinary component, soliciting contributions on policy, legal issues, and the societal impact of privacy in AI.

PPAI-22 will place particular emphasis on:
  1. Algorithmic approaches to protect data privacy in the context of learning, optimization, and decision making that raise fundamental challenges for existing technologies.
  2. Social issues related to tracking, tracing, and surveillance programs.
  3. Algorithms and frameworks to release privacy-preserving benchmarks and data sets.

Topics

The workshop organizers invite paper submissions on the following (and related) topics:
  • Applications of privacy-preserving AI systems
  • Attacks on data privacy
  • Differential privacy: theory and applications
  • Distributed privacy-preserving algorithms
  • Human rights and privacy
  • Privacy and fairness
  • Privacy policies and legal issues
  • Privacy-preserving optimization and machine learning
  • Privacy-preserving test cases and benchmarks
  • Surveillance and societal issues

Finally, the workshop will welcome papers that describe the release of privacy-preserving benchmarks and data sets that can be used by the community to solve fundamental problems of interest, including in machine learning and optimization for health systems and urban networks, to mention but a few examples.

Format

The workshop will be a one-day meeting. It will include technical sessions; a poster session where presenters can discuss their work, with the aim of further fostering collaborations; invited talks covering crucial challenges for the field of privacy-preserving AI applications, including policy and societal impacts; and tutorial talks. The workshop will conclude with a panel discussion.

Important Dates

  • November 23, 2021 – Submission Deadline [Extended]
  • December 5, 2021 – NeurIPS/AAAI Fast Track Submission Deadline
  • December 20, 2021 – Acceptance Notification [Extended]
  • December 31, 2021 – AAAI Early registration deadline
  • February 28, 2022 – Workshop Date

Submission Information

Submission URL: https://cmt3.research.microsoft.com/PPAI2022

Submission Types

  • Technical Papers: Full-length research papers of up to 7 pages (excluding references and appendices) detailing high-quality work in progress or work that could potentially be published at a major conference.
  • Short Papers: Position or short papers of up to 4 pages (excluding references and appendices) that describe initial work or the release of privacy-preserving benchmarks and datasets on the topics of interest.

NeurIPS/AAAI Fast Track (Rejected NeurIPS/AAAI papers)

Rejected NeurIPS/AAAI papers with *average* scores of at least 4.5 may be submitted directly to PPAI along with their previous reviews. These submissions may go through a light review process or be accepted directly if the provided reviews are judged to meet the workshop standard.

All papers must be submitted in PDF format, using the AAAI-22 author kit. Submissions should include the name(s), affiliations, and email addresses of all authors.
Submissions will be refereed on the basis of technical quality, novelty, significance, and clarity. Each submission will be thoroughly reviewed by at least two program committee members.
Submissions of papers rejected from the AAAI 2022 technical program are welcomed.

For questions about the submission process, contact the workshop chairs.

Registration

Registration is required for all active participants in each workshop and is also open to all interested individuals. The early registration deadline is December 31, 2021. For more information, please refer to the AAAI-22 Workshop page.

Program

All times are in Eastern Standard Time (UTC-5).

Invited talks, tutorials, and panel discussion: live streamed; access via Virtual Chair.
Spotlight and poster talks: pre-recorded and accessible at any time, with additional Q&A and discussion at the poster sessions.
Poster sessions: hosted on Virtual Chair (Room Blue 2).

Time Talk / Presenter
09:50 Introductory remarks
10:00 Invited Talk: "A bottom-up approach to making differential privacy ubiquitous" by Damien Desfontaines
Session chair: Ferdinando Fioretto
10:45 Spotlight Talk: Differential privacy and robust statistics in high dimensions
11:00 Spotlight Talk: Element Level Differential Privacy: The Right Granularity of Privacy
11:15 Break
Session chair: Fatemeh Mireshghallah
11:30 Spotlight Talk: A Fairness Analysis on Private Aggregation of Teacher Ensembles
11:45 Spotlight Talk: Benchmarking Differentially Private Synthetic Data Generation Algorithms
12:00 Tutorial: "Differentially Private Deep Learning, Theory, Attacks, and PyTorch Implementation" by Ilya Mironov, Alexandre Sablayrolles, and Igor Shilov
13:45 Flash Poster Presentations
14:00 Poster Session (on VirtualChair Room Blue 2)
15:00 Invited Talk: "When is Memorization Necessary for Machine Learning?" by Adam Smith
Session chair: Xi He
15:45 Spotlight Talk: Calibration with Privacy in Peer Review: A Theoretical Study
16:00 Spotlight Talk: APRIL: Finding the Achilles’ Heel on Privacy for Vision Transformers
16:15 Break
16:30 Invited Talk: "Personal Privacy and the Public Good" by Claire McKay Bowen
17:30 Panel Discussion: Differential Privacy and its disparate impacts.
18:20 Concluding Remarks and Poster Session (on VirtualChair Room Blue 2)

Accepted Papers

Spotlight Presentations
  • Differential privacy and robust statistics in high dimensions [ArXiv]
    Xiyang Liu (University of Washington); Weihao Kong (University of Washington); Sewoong Oh (University of Washington)
  • Element Level Differential Privacy: The Right Granularity of Privacy [ArXiv]
    Hilal Asi (Stanford University); John Duchi (Stanford University); Omid Javidbakht (Apple)
  • A Fairness Analysis on Private Aggregation of Teacher Ensembles [ArXiv]
    Cuong Tran (Syracuse University); My H Dinh (Syracuse University); Kyle Beiter (Syracuse University); Ferdinando Fioretto (Syracuse University)
  • Benchmarking Differentially Private Synthetic Data Generation Algorithms [ArXiv]
    Yuchao Tao (Duke University); Ryan McKenna (University of Massachusetts, Amherst); Michael Hay (Tumult Labs); Ashwin Machanavajjhala (Tumult Labs); Gerome Miklau (Tumult Labs)
  • Calibration with Privacy in Peer Review: A Theoretical Study [ArXiv]
    Wenxin Ding (University of Chicago); Gautam Kamath (University of Waterloo); Weina Wang (CMU); Nihar Shah (CMU)
  • APRIL: Finding the Achilles’ Heel on Privacy for Vision Transformers [ArXiv]
    Jiahao Lu (Institute of Automation, Chinese Academy of Sciences); Xi Sheryl Zhang (Institute of Automation, Chinese Academy of Sciences); Tianli Zhao (Institute of Automation, Chinese Academy of Sciences); Xiangyu He (Institute of Automation, Chinese Academy of Sciences); Jian Cheng (Chinese Academy of Sciences)
Poster Presentations
  • Exploring the Unfairness of DP-SGD Across Settings [ArXiv]
    Frederik A. Noe (The University of Copenhagen); Rasmus O.R. Herskind (University of Copenhagen); Anders Søgaard (University of Copenhagen)
  • Differentially Private Fractional Moments Estimation with Polylogarithmic Space [ArXiv]
    Lun Wang (University of California, Berkeley); Iosif Pinelis (Michigan Technological University); Dawn Song (UC Berkeley)
  • Secure Federated Feature Selection [ArXiv]
    Lun Wang (University of California, Berkeley); Qi Pang (HKUST); Shuai Wang (HKUST); Dawn Song (UC Berkeley)
  • Distributed Machine Learning and the Semblance of Trust [ArXiv]
    Dmitrii Usynin (Imperial College London); Alexander Ziller (Technische Universität München); Daniel Rueckert (Imperial College London); Jonathan Passerat-Palmbach (Imperial College London / ConsenSys Health); Georgios Kaissis (Technische Universität München)
  • SDNist: Benchmark Data and Evaluation Tools for Data Synthesizers
    Grégoire Lothe (Sarus Technology); Christine Task (Knexus Research); Isaac Slavitt (DrivenData, Inc.); Nicolas Grislain (Sarus); Karan Bhagat (Knexus Research); Gary Howarth (National Institute of Standards and Technology)
  • On privacy and confidentiality of communications in organizational graphs [ArXiv]
    Masoumeh Shafieinejad (University of Waterloo); Robert Sim (Microsoft Research); Huseyin Inan (Microsoft Research); Marcello Hasegawa (Microsoft)
  • Privacy Preserving Visual Question Answering [ArXiv]
    Cristian-Paul BARA (University of Michigan); Qing Ping (Amazon); Abhinav Mathur (Amazon); Rohith MV (Amazon Lab126); Govind Thattai (Amazon Alexa AI); Gaurav S Sukhatme (University of Southern California; Amazon)
  • LTU Attacker for Membership Inference [ArXiv]
    Joseph Pedersen (Rensselaer Polytechnic Institute); Rafael Muñoz-Gómez (Université Paris-Saclay); Jiangnan Huang (Université Paris-Saclay); Haozhe Sun (Paris-Saclay University); Isabelle Guyon (CNRS, INRIA, University Paris-Saclay and ChaLearn); Wei-Wei Tu (4Paradigm Inc.)
  • Feature Space Hijacking Attacks against Differentially Private Split Learning [ArXiv]
    Grzegorz Gawron (LiveRamp); Philip Stubbings (LiveRamp)
  • BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [ArXiv]
    Arup Mondal (Ashoka University); Harpreet Virk (Ashoka University); Debayan Gupta (Ashoka University)
  • SCOTCH: An Efficient Secure Computation Framework for Secure Aggregation [ArXiv]
    Arup Mondal (Ashoka University); Yash More (Ashoka University); Prashanthi Ramachandran (Brown University); Priyam Panda (Ashoka University); Harpreet Virk (Ashoka University); Debayan Gupta (Ashoka University)
  • Split HE: Fast Secure Inference Combining Split Learning and Homomorphic Encryption
    George Pereteanu (Imperial); Amir Alansary (Imperial College London); Jonathan Passerat-Palmbach (Imperial College London / ConsenSys Health)
  • PrivFair: a Library for Privacy-Preserving Fairness Auditing [ArXiv]
    Sikha Pentyala (Mila - Quebec AI Institute); David Melanson (University of Washington, Tacoma); Martine De Cock (University of Washington Tacoma); Golnoosh Farnadi (Mila, Université de Montréal)
  • Anonymizing Trajectory Data: Limitations and Opportunities
    Patricia Guerra-Balboa (Karlsruhe Institute of Technology); Àlex Miranda Pascual (Universitat Politècnica de Catalunya); Javier Parra-Arnau (Karlsruhe Institute of Technology); Jordi Forné (Universitat Politècnica de Catalunya); Thorsten Strufe (Karlsruhe Institute of Technology)
  • DP-SGD vs PATE: Which Has Less Disparate Impact on GANs? [ArXiv]
    Georgi Ganev (UCL)
  • Learning Privacy-Preserving Deep Kernels with Known Demographics
    Namrata Deka (University of British Columbia); Danica J. Sutherland (University of British Columbia)

Tutorial

Differentially Private Deep Learning, Theory, Attacks, and PyTorch Implementation

by Ilya Mironov (Responsible AI, Meta), Alexandre Sablayrolles (Responsible AI, Meta), and Igor Shilov (Responsible AI, Meta)

Abstract:
The tutorial is designed as a gentle introduction to the topic of differentially private deep learning. In the first part of the tutorial we cover relevant topics in the theory of differential privacy, such as composition theorems and privacy-preserving mechanisms. In the second part, we discuss Opacus, a PyTorch-based library for performant and user-friendly differentially private training, and Privacy Linter, a library for identifying privacy violations. We begin by motivating privacy-preserving deep learning with several examples where industry-grade models demonstrably leak training data. We briefly review the notion of differential privacy (DP) as a remediation strategy and show how SGD-based optimization algorithms can be adapted to DP. We take a close look at several privacy accountants (upper bounds on privacy loss) and the complementary lower bounds. In the second part of the tutorial we present our PyTorch framework for DP-SGD, Opacus. Opacus is designed to be a simple and extensible framework that can easily be plugged into an existing machine learning pipeline. Opacus’ features include implementations of several privacy accountants, vectorized computation of per-sample gradients, and support for distributed computations. We conclude by presenting the Privacy Linter, a PyTorch framework for evaluating practical privacy attacks on machine learning models. The Linter implements a number of recently proposed attacks on trained models and supports more advanced strategies.
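
For readers unfamiliar with Opacus, the following minimal sketch (not taken from the tutorial materials) illustrates what DP-SGD training with Opacus typically looks like: the PrivacyEngine wraps an existing model, optimizer, and data loader so that per-sample gradients are clipped and noised, and the accountant reports the privacy budget spent. The toy model, data, and hyperparameter values are assumptions chosen for demonstration only; the Privacy Linter is not shown.

    # Minimal DP-SGD sketch with Opacus (illustrative; assumes torch and opacus are installed).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Toy data and model, stand-ins for a real pipeline.
    X = torch.randn(1024, 20)
    y = torch.randint(0, 2, (1024,))
    train_loader = DataLoader(TensorDataset(X, y), batch_size=64)

    model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()

    # PrivacyEngine wraps model, optimizer, and data loader so that training
    # clips per-sample gradients and adds Gaussian noise (DP-SGD).
    privacy_engine = PrivacyEngine()
    model, optimizer, train_loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        noise_multiplier=1.1,   # noise scale relative to the clipping norm (illustrative value)
        max_grad_norm=1.0,      # per-sample gradient clipping threshold (illustrative value)
    )

    # Standard training loop; the DP machinery is handled by the wrapped objects.
    for epoch in range(3):
        for batch_x, batch_y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(batch_x), batch_y)
            loss.backward()
            optimizer.step()

    # The privacy accountant reports the (epsilon, delta) spent so far.
    epsilon = privacy_engine.get_epsilon(delta=1e-5)
    print(f"epsilon = {epsilon:.2f} at delta = 1e-5")
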

Invited Talks

When is Memorization Necessary for Machine Learning?

by Adam Smith (Boston University)

Abstract:
Modern machine learning models are complex, and frequently encode surprising amounts of information about individual inputs. In extreme cases, complex models appear to memorize entire input examples, including seemingly irrelevant information (exact addresses from text, for example). In this talk, we aim to understand whether this sort of memorization is necessary for accurate learning, and what the implications are for privacy. We describe two results that explore different aspects of this phenomenon. In the first, published at STOC 2021, we give natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples. This remains true even when the examples are high-dimensional and have entropy much higher than the sample size, and even when most of that information is ultimately irrelevant to the task at hand. Further, our results do not depend on the training algorithm or the class of models used for learning. Our second, unpublished result shows how memorization must occur during the training process, even when the final model is succinct and depends only on the underlying distribution. This leads to new lower bounds on the memory size of one-pass streaming algorithms for fitting natural models. Joint work with (subsets of) Gavin Brown, Mark Bun, Vitaly Feldman, and Kunal Talwar.

Personal Privacy and the Public Good

by Claire McKay Bowen (Urban Institute)

Abstract:
Both privacy experts and policymakers are at an impasse, trying to answer, “At what point does the sacrifice to our personal information outweigh the public good?” If public policymakers had access to society’s personal and confidential data, they could make more evidence-based, data-informed decisions that could accelerate economic recovery and improve COVID-19 vaccine distribution. Although privacy researchers strive to balance the need for data privacy and accuracy, access to personal data comes at a steep privacy cost for contributors, especially underrepresented groups. As a result, many federal statistical agencies in the United States either never release public data or restrict access to a select few external researchers. Further, most public data users and policymakers are not familiar with data privacy and confidentiality methods and must make informed policy decisions on how to best balance the need for confidential data access and privacy protection. This talk will cover several projects conducted at the intersection of expanding access to confidential data and public policy, the lessons learned when working with data users and non-privacy researchers, and the inequity of data privacy and confidentiality methodologies for underrepresented groups.

A bottom-up approach to making differential privacy ubiquitous

by Damien Desfontaines (Tumult Labs)

Abstract:
Among computer science researchers, differential privacy has been the gold standard for anonymization for over a decade. The real world is starting to catch up, albeit slowly. There is a need for strong anonymization techniques in small and large organizations alike… but it seems like only large organizations end up deploying DP for practical use cases, and only because they can afford to invest in specialized science and engineering teams to help them. How can we bridge this gap, and drive the widespread adoption of differential privacy? This talk will outline a bottom-up approach to bring differential privacy to a much more widespread audience. From initial outreach efforts all the way to production deployments, I will describe what a compelling solution could look like, and what role the scientific community can play in these efforts.

PPAI-22 Panel:

Differential Privacy and disparate impacts in downstream decisions and learning tasks

Panelists

Gerome Miklau

University of Massachusetts, Amherst and Tumult Labs

Claire McKay Bowen

Urban Institute

John M. Abowd

U.S. Census Bureau

Steven Wu

Carnegie Mellon University

Invited Speakers

Adam Smith

Boston University

Claire McKay Bowen

Urban Institute

Damien Desfontaines

Tumult Labs

Ilya Mironov

Responsible AI (Meta)

Program Committee

  • Amrita Roy Chowdhury (University of Wisconsin-Madison)
  • Aurélien Bellet (INRIA)
  • Carsten Baum (Aarhus University)
  • Catuscia Palamidessi (Laboratoire d'informatique de l'École polytechnique)
  • Christine Task (Knexus Research)
  • Cuong Tran (Syracuse University)
  • Di Wang (KAUST)
  • Elette Boyle (IDC Herzliya)
  • Fatemeh Mireshghallah (University of California, San Diego)
  • Graham Cormode (University of Warwick)
  • Hanieh Hashemi (University of Southern California)
  • Hao Wang (Rutgers University)
  • Ivan Habernal (Technical University of Darmstadt)
  • Jan Ramon (INRIA, FR)
  • Jianfeng Chi (University of Virginia)
  • Keyu Zhu (Georgia Tech)
  • Kobbi Nissim (Georgetown University)
  • Marco Romanelli (CentraleSupélec - CNRS - L2S)
  • Mark Bun (Boston University)
  • Michael Hay (Colgate University)
  • Mohamed Ali Kaafar (Macquarie University and CSIRO-Data61)
  • Mohammad Mahdi Khalili (University of Delaware)
  • Olga Ohrimenko (The University of Melbourne)
  • Paritosh P Ramanan (Georgia Institute of Technology)
  • Rakshit Naidu (Carnegie Mellon University)
  • Ranya Aloufi (Imperial College London)
  • Raouf Kerkouche (CISPA – Helmholtz Center for Information Security)
  • Santiago Zanella-Beguelin (Microsoft Research)
  • Terrence WK Mak (Georgia Institute of Technology)
  • Vikrant Singhal (Northeastern University)
  • Xi He (University of Waterloo)
  • Yunwen Lei (University of Birmingham)
  • Zhiqi Bu (University of Pennsylvania)

Workshop Chairs

Ferdinando Fioretto

Syracuse University

ffiorett@syr.edu

Aleksandra Korolova

University of Southern California

korolova@usc.edu

Pascal Van Hentenryck

Georgia Institute of Technology

pvh@isye.gatech.edu