TCC 2020

Nov 16-19 2020

Virtual

Matches Made in Heaven: Cryptography and Theoretical Computer Science
Workshop

The purpose of this workshop is to highlight the strong connections between Cryptography and Theoretical Computer Science. It will feature recent TCS developments that have relevance to Cryptography, recent developments in Cryptography that have relevance to the broader TCS community, and other topics that are of common interest to both communities.

This workshop has been organized by Yuval Ishai, Kasper Green Larsen, Moni Naor, and Alessandra Scafuro.

The times and links to these talks may be found in the TCC 2020 conference program. Registration is free but required to attend. The sessions will also be streamed live on the IACR YouTube channel.

Monday November 16th

Avi Wigderson

Title

The Impact of Cryptographic Thinking on TCS and Beyond


Tuesday November 17th

Ryan Williams

Title

Thinking Algorithmically About Impossibility

Abstract

I will give a high-level overview of how one can prove lower bounds in circuit complexity, by the design of appropriate algorithms that reason about circuits. As we go along, I will mention various relationships with cryptography.

Aayush Jain

Title

Indistinguishability Obfuscation from Well-Founded Assumptions

Abstract

We present a construction of an indistinguishability obfuscation scheme whose security rests on the subexponential hardness of four well-founded assumptions. Specifically, we show the existence of an indistinguishability obfuscation scheme for all circuits assuming subexponential security of the following assumptions:

  • The Learning with Errors (LWE) assumption with arbitrarily small subexponential modulus-to-noise ratio,
  • The SXDH assumption with respect to bilinear groups of prime order $p$,
  • The existence of a Boolean pseudorandom generator (PRG) in $\mathsf{NC}^0$ with arbitrary polynomial stretch, that is, mapping $n$ bits to $n^{1+\tau}$ bits for any constant $\tau>0$,
  • The Learning Parity with Noise (LPN) assumption over $\mathbb{Z}_p$ with error rate $\ell^{-\delta}$, where $\ell$ is the dimension of the secret and $\delta>0$ is an arbitrarily small constant.

Further, assuming only polynomial security of these assumptions, there exists a compact public-key functional encryption scheme for all circuits.
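
For readers less familiar with the last assumption, the following is a schematic statement of LPN over $\mathbb{Z}_p$ as described above; the exact parameterization used in the paper may differ. With $m$ samples, secret dimension $\ell$, and error rate $\ell^{-\delta}$, the assumption is that

$$(\mathbf{A},\, \mathbf{A}\mathbf{s} + \mathbf{e}) \;\approx_c\; (\mathbf{A},\, \mathbf{u}), \qquad \mathbf{A} \leftarrow \mathbb{Z}_p^{m \times \ell},\ \mathbf{s} \leftarrow \mathbb{Z}_p^{\ell},\ \mathbf{u} \leftarrow \mathbb{Z}_p^{m},$$

where each coordinate of the error vector $\mathbf{e}$ is $0$ except with probability $\ell^{-\delta}$, in which case it is a uniformly random element of $\mathbb{Z}_p$.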

The main technical novelty is the introduction and construction of a cryptographic pseudorandom generator that we call a Structured-Seed PRG (sPRG), assuming LPN over $\mathbb{Z}_p$ and PRGs in $\mathsf{NC}^0$.

During the talk, I will discuss how structured-seed PRGs have evolved from different notions of novel pseudorandom generators proposed in the past few years, and how an interplay between different areas of theoretical computer science played a major role in providing valuable insights leading to this work. Time permitting, I will go into the details of how to construct sPRGs.

Joint work with Huijia (Rachel) Lin and Amit Sahai.


Wednesday November 18th

David Woodruff

Title

Adversarially Robust Streaming Algorithms

Abstract

A streaming algorithm is given a long sequence of items and seeks to compute or approximate some function of this sequence using a small amount of memory. A body of work has been developed over the last two decades, resulting in optimal streaming algorithms for a number of problems. I will start by surveying some of these problems. I will then investigate the adversarial robustness of streaming algorithms. An algorithm is considered robust if its performance guarantees hold even if the stream is chosen adaptively by an adversary that observes the outputs of the algorithm along the stream and can react in an online manner. While deterministic streaming algorithms are inherently robust, many central problems do not admit sublinear-space deterministic algorithms; on the other hand, space-efficient randomized algorithms for these problems are generally not adversarially robust. This raises the question of whether there exist efficient adversarially robust (randomized) streaming algorithms for these problems. I will discuss recent work showing for a number of streaming problems, one can obtain algorithms that are adversarially robust with a small overhead in their memory requirements. I will then discuss followup work achieving almost no overhead for a large number of problems.
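
As a toy illustration of the objects in question (not one of the constructions from the talk), the sketch below shows a standard style of randomized, small-space estimator for the number of distinct items in a stream. It works well on streams fixed in advance, but an adversary who observes its estimates can hunt for items that hash to unusually small values and feed them back, skewing the estimate; this is exactly the failure mode that adversarially robust streaming algorithms are designed to rule out. All names in the code are illustrative.

```python
import random


class MinValuesDistinctCounter:
    """Toy randomized estimator for the number of distinct items (F0).

    Keeps the k smallest (normalized) hash values seen so far and
    estimates the distinct count as k divided by the k-th smallest value.
    Space is O(k), independent of the stream length.
    """

    def __init__(self, k=64, seed=0):
        self.k = k
        self.salt = random.Random(seed).getrandbits(64)
        self.smallest = []  # at most k smallest hash values seen so far

    def _hash(self, item):
        # Illustrative hash into [0, 1); a real implementation would use a
        # hash family with provable guarantees.
        h = hash((self.salt, item)) & ((1 << 53) - 1)
        return h / float(1 << 53)

    def update(self, item):
        v = self._hash(item)
        if v not in self.smallest:
            self.smallest = sorted(self.smallest + [v])[: self.k]

    def estimate(self):
        if len(self.smallest) < self.k:
            return len(self.smallest)  # fewer than k distinct items seen
        return int(self.k / self.smallest[-1])


# On an oblivious (non-adaptive) stream the estimate is reasonable, but an
# adversary allowed to query estimate() after every update can adaptively
# choose future items as a function of the answers and bias the sketch.
counter = MinValuesDistinctCounter(k=32)
for x in range(10_000):
    counter.update(x)
print(counter.estimate())  # roughly 10,000 on this non-adaptive stream
```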

Based on joint works with Omri Ben-Eliezer, Rajesh Jayaram, Eylon Yogev, and with Samson Zhou.

Giulio Malavolta

Title

Obfuscation from Homomorphic Encryption Schemes

Abstract

We discuss a new approach to constructing general-purpose indistinguishability obfuscation (iO). We construct iO from an interplay of homomorphic encryption schemes that satisfy a series of structural properties: crucially, one should be able to produce a short decryption hint that allows anyone to recover the entire plaintext of a given ciphertext, without affecting the semantic security of the scheme.

We show a generic candidate construction of iO based on three building blocks: (i) a standard FHE scheme with linear decrypt-and-multiply, (ii) a linearly homomorphic encryption scheme with short decryption hints, and (iii) a cryptographic hash function. We show evidence that this construction is secure by providing an argument in an appropriately defined oracle model.
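
As a rough sketch of the first building block (in generic notation, which may not match the paper's): an FHE scheme supports linear decrypt-and-multiply if, for a ciphertext $\mathbf{C}$ encrypting a message $m$ under secret key $\mathbf{s}$ and any scalar $\omega$, there is a decryption procedure that is linear in $\mathbf{s}$ and recovers $\omega \cdot m$ up to small noise:

$$\mathsf{DecMult}(\mathbf{C}, \omega;\ \mathbf{s}) \;=\; \omega \cdot m + e \pmod{q}, \qquad |e| \ll q,$$

where $\mathsf{DecMult}(\mathbf{C}, \omega;\ \cdot)$ is a linear function of $\mathbf{s}$. GSW-style FHE schemes are known to satisfy a property of this form.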

We view our construction as a significant departure from state-of-the-art constructions, and it is in fact quite simple.

Daniel Wichs

Title

Candidate Obfuscation via Oblivious LWE Sampling

Abstract

We present a new, simple candidate construction of indistinguishability obfuscation (iO). Our scheme is inspired by lattices and learning-with-errors (LWE) techniques, but we are unable to prove security under a standard assumption. Instead, we formulate a new falsifiable assumption under which the scheme is secure. Furthermore, the scheme plausibly achieves post-quantum security.

Our construction is based on the recent "split FHE" framework of Brakerski, Döttling, Garg, and Malavolta (EUROCRYPT '20), and we provide a new instantiation of this framework. As a first step, we construct an iO scheme that is provably secure assuming that LWE holds and that it is possible to obliviously generate LWE samples without knowing the corresponding secrets. We define a precise notion of oblivious LWE sampling that suffices for the construction. It is known how to obliviously sample from any distribution (in a very strong sense) using iO, and our result provides a converse, showing that the ability to obliviously sample from the specific LWE distribution (in a much weaker sense) already implies iO. As a second step, we give a heuristic construction of oblivious LWE sampling. At a very high level, we do this by homomorphically generating pseudorandom LWE samples using an encrypted pseudorandom function.
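
To fix notation (schematically; the precise definition from the talk is more refined), an LWE sample with respect to a public matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$ has the form

$$\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \pmod{q}, \qquad \mathbf{s} \leftarrow \mathbb{Z}_q^{n},\ \mathbf{e}\ \text{small},$$

and the oblivious sampling requirement is, roughly, that one can produce such a vector $\mathbf{b}$ without the sampler (or anyone else) learning the corresponding secret $\mathbf{s}$ or noise $\mathbf{e}$.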

Joint work with Hoeteck Wee.

Romain Gay

Title

Indistinguishability Obfuscation from Circular Security

Abstract

We show the existence of indistinguishability obfuscators (iO) for general circuits assuming subexponential security of:

  1. the Learning with Errors (LWE) assumption (with subexponential modulus-to-noise ratio);
  2. a circular security conjecture regarding the Gentry-Sahai-Waters (GSW) encryption scheme.

The circular security conjecture states that a notion of leakage-resilient security (that we prove is satisfied by GSW assuming LWE) is retained in the presence of an encryption of the secret key. Our work thus places iO on qualitatively similar assumptions as unlevelled FHE, for which known constructions also rely on a circular security conjecture.
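
Schematically (the actual conjecture concerns a leakage-resilient variant and differs in detail), a circular security assumption states that semantic security is retained even when the adversary is additionally given an encryption of the secret key:

$$\big(\mathsf{pk},\ \mathsf{Enc}_{\mathsf{pk}}(\mathsf{sk}),\ \mathsf{Enc}_{\mathsf{pk}}(m_0)\big) \;\approx_c\; \big(\mathsf{pk},\ \mathsf{Enc}_{\mathsf{pk}}(\mathsf{sk}),\ \mathsf{Enc}_{\mathsf{pk}}(m_1)\big).$$

This is the same flavor of conjecture used to turn levelled FHE into unlevelled FHE via bootstrapping.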


Thursday November 19th

Katrina Ligett

Title

Privacy as Stability, for Generalization

Abstract

Many data analysis pipelines are adaptive: the choice of which analysis to run next depends on the outcome of previous analyses. Common examples include variable selection for regression problems and hyper-parameter optimization in large-scale machine learning problems: in both cases, common practice involves repeatedly evaluating a series of models on the same dataset. Unfortunately, this kind of adaptive re-use of data invalidates many traditional methods of avoiding overfitting and false discovery, and has been blamed in part for the recent flood of non-reproducible findings in the empirical sciences. An exciting line of work beginning with Dwork et al. 2015 establishes the first formal model and first algorithmic results providing a general approach to mitigating the harms of adaptivity, via a connection to the notion of differential privacy.

In this talk, we'll explore the notion of differential privacy and gain some understanding of how and why it provides protection against adaptivity-driven overfitting. Many interesting questions in this space remain open.
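
For reference, the definition at the center of this connection (standard, not specific to this work): a randomized mechanism $M$ is $(\varepsilon, \delta)$-differentially private if for all datasets $D, D'$ differing in a single element and all sets $S$ of outputs,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta.$$

Transfer theorems in this line of work show, roughly, that answers to adaptively chosen queries computed by a differentially private mechanism remain accurate with respect to the underlying distribution, not just the sample at hand.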

Joint work with: Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi (UPenn), and Moshe Shenfeld (HUJI). This work appeared at ITCS 2020.