SoS+TCS Reading Group
The SoS+TCS reading group is a once-monthly virtual reading group on recent papers in theoretical computer science, focusing primarily on the power and limitations of hierarchies of convex programs (like the Sum of Squares hierarchy) in algorithm design.
Format: We meet for 3-hour whiteboard/chalk/tablet talks, to allow lots of time for interaction and discussion with the speaker. Sessions will have 1-2 coffee breaks. Everyone is welcome, and no question is too simple. Our format is inspired by the old MIT/MSR Reading Group. Contact the organizers to get on the mailing list for Zoom links.
3 hours is a long time. It is not rude to leave at the first break, or even before.
Date and Time: By default, the 3rd Wednesday of every month, from 9am-12pm Pacific time. Schedule may vary from month to month.
Organizers: Tselil Schramm and Sam Hopkins
Schedule, Winter/Spring 2021
January 26 (note this is the 4th Tuesday; rescheduled from Wednesday, January 27), 9am-12pm Pacific: Mitali Bafna (Harvard) and Max Hopkins (UCSD) on SoS algorithms for Unique Games over certifiable small-set expanders and Johnson schemes. NOTE DATE CHANGE.
Abstract:
The Unique Games Conjecture (UGC) is a central open question in hardness of approximation which posits that it is NP-hard to distinguish between nearly satisfiable and highly unsatisfiable instances of a certain 2-variable constraint satisfaction problem called Unique Games (UG).
In the first half of this talk, we cover a new strategy for playing (affine) UG based on rounding "low-entropy" solutions --- measured via a novel global potential function --- to sum-of-squares (SoS) semidefinite programs. As a result, we obtain efficient algorithms for UG whenever low-degree sum-of-squares proofs certify good bounds on the small-set-expansion of the underlying constraint graph, notably including well-studied algebraic graphs such as the noisy hypercube and short code. The best-known algorithms for these important classes prior to this work took quasi-polynomial and nearly-exponential time respectively [ABS10].
In the second half of the talk, we cover a local-to-global extension of this strategy for constraint graphs whose small, non-expanding sets are SoS-certifiably explained by non-expanding local structure. We focus mainly on the case of the Johnson Scheme, and in particular on how viewing these graphs as random walks on an underlying high-dimensional expander facilitates the understanding of such local-to-global structure. Combined with the UG strategy for certifiable small-set expanders, we discuss how these structure theorems provide an efficient algorithm for UG over the Johnson Scheme whose soundness and runtime depend on the number of unique large eigenvalues, providing a substantial improvement over prior techniques reliant on the total number of large eigenvalues [ABS10].
Time permitting, we end by discussing connections with the similar characterization of non-expansion for the Grassmann scheme (the q-analog of the Johnson scheme) used to resolve the 2-2 Games Conjecture [KMS18], raising a fascinating interplay between hardness of and algorithms for unique games.
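For background (a generic sketch of the standard setup, not the specific programs analyzed in the papers below), Unique Games is usually phrased as polynomial optimization in 0/1 label indicators, and the sum-of-squares relaxations referred to above relax this program to a pseudo-expectation of bounded degree:

```latex
% Unique Games with alphabet [k]: for each edge (u,v), a bijection \pi_{uv} on [k].
% Encode a labeling by 0/1 indicators z_{u,a} ("vertex u gets label a"):
\[
  \mathrm{val}(G) \;=\; \max_{z}\ \frac{1}{|E|}\sum_{(u,v)\in E}\;\sum_{a\in[k]} z_{u,a}\, z_{v,\pi_{uv}(a)}
  \quad\text{s.t.}\quad z_{u,a}^2 = z_{u,a}, \qquad \sum_{a\in[k]} z_{u,a} = 1 \ \text{ for every } u.
\]
% The degree-d SoS relaxation replaces a true labeling by a pseudo-expectation
% \tilde{\mathbb{E}} on polynomials of degree at most d satisfying
% \tilde{\mathbb{E}}[1] = 1, \tilde{\mathbb{E}}[p^2] \ge 0, and the constraints above.
% It is computable by semidefinite programming in time n^{O(d)}, and "rounding" means
% extracting a good labeling from \tilde{\mathbb{E}}.
```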
Based on joint works with Barak, Kaufman, Kothari, Lovett, Schramm, and Steurer: paper 1, paper 2
February 17, 9am-12pm Pacific: Madhur Tulsiani (TTIC) on SoS algorithms and lower bounds for constraint satisfaction problems over high-dimensional expanders.
Abstract:
This talk will discuss some ways in which notions of expansion in hypergraphs (high-dimensional expansion) can be used to prove algorithmic as well as hardness results for the Sum-of-Squares SDP hierarchy. We plan to discuss two recent results related to Constraint Satisfaction Problems (CSPs) and the SoS hierarchy:
- CSP instances on high-dimensional expanders are easy to approximate using $O(1)$ levels of the SoS hierarchy
- Explicit CSP instances on high-dimensional expanders that are hard for $\Omega(\sqrt{\log n})$ levels of the SoS hierarchy.
We will discuss why both the above results are simultaneously true, using different notions of instances "on" high-dimensional expanders, and the expansion properties relevant for both results. Time permitting, we will also discuss some new structural results for expanding hypergraphs, which yield faster algorithms, bypassing the SoS hierarchy.
Based on joint works with Alev, Dinur, Filmus, Harsha, Jeronimo, Quintana, and Srivastava.
March 18 (note this is a Thursday; rescheduled from Wednesday, March 17), 9am-12pm Pacific: Daniel Kane (UC San Diego) on Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. slides 1, slides 2
Abstract:
We present a new technique for learning latent variable models with many parameters. The basic idea is to use the method of moments to find a large vector space of polynomials which nearly vanish on all of the parameters of the problem and to use this along with a new structural result to find a small cover of the set of relevant parameters after which exhaustive algorithms can be applied. Using this technique we develop substantially improved algorithms for learning mixtures of spherical Gaussians, mixtures of linear regressions and non-negative linear combinations of ReLUs.
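As a toy illustration of the method-of-moments starting point (a made-up one-dimensional example for intuition, not the algorithm from the talk): for a uniform mixture of N(+mu, 1) and N(-mu, 1), the polynomial mu^2 - (E[X^2] - 1) vanishes at the true parameter, so plugging empirical moments into such polynomials yields equations that nearly vanish at the parameters one wants to recover.

```python
import numpy as np

# Hypothetical toy example: a uniform mixture of N(+mu, 1) and N(-mu, 1) in one dimension.
rng = np.random.default_rng(0)
mu_true = 2.0
n = 200_000
signs = rng.choice([-1.0, 1.0], size=n)
samples = signs * mu_true + rng.standard_normal(n)

# Method of moments: E[X^2] = mu^2 + 1, so the polynomial p(mu) = mu^2 - (E[X^2] - 1)
# vanishes at the true parameter; with empirical moments it only *nearly* vanishes.
m2 = np.mean(samples ** 2)
mu_hat = np.sqrt(max(m2 - 1.0, 0.0))

print(f"true mu = {mu_true:.3f}, method-of-moments estimate = {mu_hat:.3f}")
```

The talk's setting involves many parameters and a large vector space of nearly-vanishing polynomials; the toy above only shows the simplest single-moment version of the idea.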
April 23 (rescheduled from April 21), 9am-12pm Pacific: Fred Koehler (MIT) on Approximating Partition Functions of Ising Models
Abstract:
We discuss some approaches to the following classical algorithmic problem: approximating the normalizing constant (partition function) in Ising models. These are distributions analogous to a Gaussian measure on the hypercube, originally introduced in statistical physics and studied/used in many fields. First, we describe an algorithm using convex programming hierarchies to estimate the normalizing constant, and explain its connection to a popular variational method called the naive mean-field approximation. As an application, we answer a question of Allen, O'Donnell, and Zhou on the power of correlation rounding. Second, we discuss algorithmic guarantees for sampling and approximating the partition function in some frustrated systems, where the naive mean-field approximation fails. Both parts of the talk are based on mixture decomposition. Based on joint work with Vishesh Jain, Andrej Risteski, Elchanan Mossel, Ronen Eldan, and Ofer Zeitouni; I will also mention some related works.
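For readers who want to experiment, here is a small self-contained sketch (a toy example written for this page, not code from the talk) comparing the exact log-partition function of a tiny Ising model with the naive mean-field lower bound obtained by restricting the Gibbs variational principle to product distributions.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy instance: an Ising model P(x) proportional to exp(0.5*x^T J x + h^T x)
# over x in {-1,+1}^n, with J symmetric and zero on the diagonal.
rng = np.random.default_rng(1)
n = 10
J = rng.normal(scale=0.2, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)

def weight_exponent(x):
    return 0.5 * x @ J @ x + h @ x

# Exact log-partition function by brute-force enumeration (only feasible for small n).
log_Z = np.log(sum(np.exp(weight_exponent(np.array(x)))
                   for x in itertools.product([-1.0, 1.0], repeat=n)))

# Naive mean-field: restrict the Gibbs variational principle to product distributions
# with means m_i, giving the lower bound
#   log Z >= 0.5*m^T J m + h^T m + sum_i H((1+m_i)/2),  H = binary entropy in nats.
def neg_mean_field_objective(m):
    p = np.clip((1.0 + m) / 2.0, 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum()
    return -(0.5 * m @ J @ m + h @ m + entropy)

res = minimize(neg_mean_field_objective, np.zeros(n),
               bounds=[(-1 + 1e-9, 1 - 1e-9)] * n)
print(f"exact log Z = {log_Z:.4f}, naive mean-field lower bound = {-res.fun:.4f}")
```

Any feasible choice of means gives a valid lower bound here; how accurate this product-distribution approximation can be is part of what the talk connects to correlation rounding.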
May 19, 9am-12pm Pacific:
June 16, 9am-12pm Pacific: Morris Yau (UC Berkeley) on Online and Distribution-Free Robustness.