Fortnightly on Wednesdays, 14:30
Speaker: Fahad Panolan ( Slides, Video)
Title: Parameterized Approximation Scheme for Independent Set of Rectangles
Abstract: The area of parameterized approximation seeks to combine approximation and parameterized algorithms to obtain, e.g., $(1 + \varepsilon)$-approximations in $f(k, \varepsilon)\mathsf{poly}(n)$ time where $k$ is some parameter of the input and $n$ is the input length. If we have an algorithm with such a running time for any $\varepsilon>0$, then it is called a parameterized approximation scheme. In this talk we discuss a parameterized approximation scheme for the maximum independent set of rectangles problem.
In the maximum independent set of rectangles problem (MISR), the input is a collection of $n$ axis-parallel rectangles in the plane. Our goal is to select a maximum-cardinality subset of pairwise non-overlapping rectangles. This problem is NP-hard and also W[1]-hard [Marx, ESA’05]. The best-known polynomial-time approximation factor is $O(\log \log n)$ [Chalermsook and Chuzhoy, SODA’09], and the problem admits a QPTAS [Adamaszek and Wiese, FOCS’13; Chuzhoy and Ene, FOCS’16].
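To make the problem concrete, here is a minimal brute-force sketch in Python. It is only an illustration of the problem definition: the rectangle encoding, function names, and the exhaustive search are chosen for demonstration and are not from the paper, which gives far more sophisticated algorithms.

```python
from itertools import combinations

def overlap(a, b):
    """Two axis-parallel rectangles (x1, y1, x2, y2) overlap if they share
    interior area; touching along an edge or at a corner is allowed."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def max_independent_rectangles(rects):
    """Brute-force MISR: try subsets from largest to smallest and return the
    first pairwise non-overlapping one (exponential time, tiny inputs only)."""
    for size in range(len(rects), 0, -1):
        for subset in combinations(rects, size):
            if all(not overlap(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

print(max_independent_rectangles([(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2)]))
# -> [(0, 0, 2, 2), (2, 0, 4, 2)]
```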
The talk is based on the paper “Parameterized Approximation Schemes for Independent Set of Rectangles and Geometric Knapsack” by Fabrizio Grandoni, Stefan Kratsch, and Andreas Wiese, which appeared at ESA 2019. The paper is available here.
Speaker: N. R. Aravind ( Slides, Video)
Title: Algorithms for 3-SAT
Abstract: The 3-SAT problem is to decide whether a given CNF formula with clauses of size three has a satisfying assignment. This is one of the fundamental NP-complete problems and hence is not expected to have a polynomial-time algorithm.
In this talk, we will look at various known algorithms for this, with a focus on the algorithm of Paturi, Pudlak, Saks and Zane (2005), which has the current best running time of $O(1.308^n)$, where $n$ is the number of variables.
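For contrast with these bounds, here is a short sketch of Schöning's classical random-walk algorithm, which already beats brute force: with sufficiently many random restarts its expected running time is roughly $(4/3)^n$ up to polynomial factors. This is only an illustrative sketch (clause encoding and parameters are chosen for demonstration), not the PPSZ algorithm discussed in the talk.

```python
import random

def schoening_3sat(clauses, n, tries=200):
    """Schöning's random-walk heuristic for 3-SAT (illustrative sketch).
    clauses: list of tuples of non-zero ints, e.g. (1, -2, 3) encodes
    (x1 OR not x2 OR x3); variables are 1..n. Returns a satisfying
    assignment (dict) or None if none was found within the given tries."""
    for _ in range(tries):
        assignment = {v: random.choice([False, True]) for v in range(1, n + 1)}
        for _ in range(3 * n):  # local-search phase of length 3n
            unsatisfied = [c for c in clauses
                           if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
            if not unsatisfied:
                return assignment
            # pick a random unsatisfied clause and flip one of its variables
            lit = random.choice(random.choice(unsatisfied))
            assignment[abs(lit)] = not assignment[abs(lit)]
    return None

# Example: (x1 OR x2 OR x3) AND (NOT x1 OR x2 OR NOT x3)
print(schoening_3sat([(1, 2, 3), (-1, 2, -3)], n=3))
```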
The PPSZ paper can be found here.
Speaker: Karteek Sreenivasaiah ( Slides, Video)
Title: Fooling machines that have limited computational power
Abstract: We often witness events that look completely random to us. A classic example of a random event is a coin toss. However, observe that if we could compute all the parameters of the coin toss (force, point of impact, angle, etc.), then the outcome could be computed deterministically.
If we had enough computational power to correctly compute the outcome of a coin toss before it lands, every time, then coin tosses would not look random anymore! Could it be that a lack of computational power makes events look completely random even when they are not?
In this talk, we will look at a classic paper by Noam Nisan (1992) that shows formally that if a randomized algorithm is computationally restricted, then it can be fooled into believing that a not-so-random stream of bits is completely random (as if from the uniform distribution). More formally, a randomized algorithm that needs $R$ random bits and space $O(s)$ can be simulated with only $O(s \log R)$ truly random bits.
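The following toy Python sketch shows the recursive structure behind such generators: one truly random block plus a short list of hash functions expands to exponentially many output blocks, so roughly $O(s \log R)$ seed bits suffice for $R$ output bits. This is only an illustration of the recursion; the concrete hash family below (affine maps modulo a prime) and all parameters are assumptions for demonstration, not the exact family and analysis in Nisan's paper.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime, comfortably larger than 2**m below

def make_hash(m):
    """A hash from a pairwise-independent-style family: x -> ((a*x + b) mod P) mod 2**m."""
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % (1 << m)

def nisan_generator(x, hashes):
    """Recursion behind Nisan's generator: G_0(x) = x and
    G_k(x) = G_{k-1}(x) followed by G_{k-1}(h_k(x)).
    With t hashes, one m-bit block expands to 2**t blocks."""
    if not hashes:
        return [x]
    h, rest = hashes[-1], hashes[:-1]
    return nisan_generator(x, rest) + nisan_generator(h(x), rest)

m, t = 16, 5                       # block length and recursion depth (toy values)
x = random.randrange(1 << m)       # one truly random block
hashes = [make_hash(m) for _ in range(t)]
blocks = nisan_generator(x, hashes)
print(len(blocks), "pseudorandom blocks of", m, "bits each")  # 32 blocks
```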
Nisan’s paper can be found here.
Speaker: Rogers Mathew
Title: Improved bounds for the sunflower lemma ( Slides, Video)
Abstract: An ‘$r$-sunflower’ is a collection of $r$ sets such that the intersection of each pair equals the intersection of all of them. Let $F$ be a family of $w$-sized sets. Erdős and Rado (1960) proved the sunflower lemma: if $|F| > w!(r-1)^w$, then $F$ contains an $r$-sunflower. The famous sunflower conjecture states that, for fixed $r$, the above bound on $|F|$ can be improved to $c^w$ for some constant $c = c(r)$. This talk is based on a recent work by Alweiss, Lovett, Wu and Zhang (STOC 2020) which makes significant progress on the conjecture.
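As a quick illustration of the definition (a toy sketch, not from the paper), the following Python snippet checks the sunflower condition and searches a tiny family by brute force:

```python
from itertools import combinations

def is_sunflower(sets):
    """An r-sunflower: the intersection of every pair of sets equals the
    common intersection (the 'core') of all of them."""
    sets = [frozenset(s) for s in sets]
    core = frozenset.intersection(*sets)
    return all(a & b == core for a, b in combinations(sets, 2))

def find_sunflower(family, r):
    """Brute-force search for an r-sunflower in a set family (tiny inputs only)."""
    for candidate in combinations(family, r):
        if is_sunflower(candidate):
            return candidate
    return None

family = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}]
print(find_sunflower(family, 3))  # ({1,2,3}, {1,4,5}, {1,6,7}), core {1}
```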
Speaker: Maria Francis
Title: Ideal lattices in Ring-LWE Cryptography ( Slides, Video)
Abstract: Lattice-based cryptography is one of the serious contenders for post-quantum cryptography. It has many other attractive features, including simple and efficient algorithms, security guarantees based on worst-case hardness, and constructions of versatile tools such as a candidate for Fully Homomorphic Encryption (FHE), the “holy grail” problem in cryptography.
The idea in lattice-based cryptography is to use conjectured hard problems on point lattices as the foundation for secure cryptographic systems. Even though the first lattice-based cryptosystem, NTRU, was introduced in 1998, major developments in this area happened around 2005, when Regev introduced the Learning with Errors (LWE) problem and the corresponding public-key encryption system. The ring-based analogue, the Ring Learning with Errors (Ring-LWE) problem, proposed by Lyubashevsky, Peikert and Regev in 2010, is important due to its increased efficiency and its application to constructing FHE schemes. The security of Ring-LWE relies on the hardness of certain standard problems over algebraic structures called ideal lattices.
In this talk, we will give an overview of the mathematical foundations of lattices, the LWE problem, and how univariate ideal lattices have been used in Ring-LWE.
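To give a feel for the (plain) LWE problem itself, here is a deliberately toy and completely insecure Python sketch; the parameters and names are chosen only for demonstration, and Ring-LWE would replace the random vectors below by elements of a polynomial ring.

```python
import random

n, q = 8, 97                       # dimension and modulus (far too small for security)

def small_error():
    """A toy narrow error distribution over {-1, 0, 1}."""
    return random.choice([-1, 0, 0, 1])

secret = [random.randrange(q) for _ in range(n)]

def lwe_sample(s):
    """Return one LWE sample (a, b) with b = <a, s> + e (mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    b = (sum(ai * si for ai, si in zip(a, s)) + small_error()) % q
    return a, b

for a, b in (lwe_sample(secret) for _ in range(4)):
    print(a, b)
# Search-LWE: recover `secret` from many such (a, b) pairs.
```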
Speaker: Subrahmanyam Kalyanasundaram
Title: The Sensitivity Conjecture and its Resolution ( Slides, Video)
Abstract: The sensitivity and block sensitivity are two of the many complexity measures that give an idea of how “complex” a given boolean function is. In 1992, Nisan and Szegedy conjectured that sensitivity and block sensitivity are polynomially related. Though very simple to state, this conjecture remained open until last year. It was resolved by Huang using a short, simple and elegant proof.
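To make the two measures concrete, here is a small brute-force Python sketch (an illustration only; it is feasible just for very small $n$):

```python
from itertools import combinations

def sensitivity(f, n):
    """s(f): max over inputs x of how many single-bit flips change f(x)."""
    return max(sum(1 for i in range(n) if f(x) != f(x ^ (1 << i)))
               for x in range(1 << n))

def block_sensitivity(f, n):
    """bs(f): max over inputs x of the largest number of pairwise disjoint
    blocks of coordinates whose simultaneous flip changes f(x)."""
    blocks = [frozenset(b) for size in range(1, n + 1)
              for b in combinations(range(n), size)]

    def max_disjoint(bs_list):
        # exact recursive search for the largest set of pairwise disjoint blocks
        if not bs_list:
            return 0
        first, rest = bs_list[0], bs_list[1:]
        take = 1 + max_disjoint([b for b in rest if not (b & first)])
        return max(take, max_disjoint(rest))

    best = 0
    for x in range(1 << n):
        sensitive = [b for b in blocks
                     if f(x) != f(x ^ sum(1 << i for i in b))]
        best = max(best, max_disjoint(sensitive))
    return best

f = lambda x: int(x != 0)   # the OR function on 3 bits
print(sensitivity(f, 3), block_sensitivity(f, 3))  # 3 3
```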
The proof uses an equivalence result by Gotsman and Linial, by which it suffices to lower bound the maximum degree of any induced subgraph of the $n$-dimensional boolean hypercube on $2^{n-1} + 1$ vertices. Huang shows a tight bound of $\sqrt{n}$ for this quantity, thus proving the sensitivity conjecture.
In the talk, I will briefly survey sensitivity and related complexity measures before explaining the proof. I shall also try to explain some of the associated results on which the proof is built. The talk will be largely self-contained, assuming only basic graph theory and linear algebra.
Huang’s paper is available here and was published in the Annals of Mathematics in Nov 2019.
Speaker: Rakesh Venkat
Title: Lower Bounds on Linear Programming formulations for the Travelling Salesman Problem ( Slides)
Abstract: Linear Programming (LP) based techniques are a central tool in the design of approximation algorithms for NP-Hard problems. The first step in such approaches involves writing a set of linear inequalities (using some variables) to capture (i.e. encode) the solutions to the problem instance within the polytope defined by the feasible region. The ideal goal of such a formulation is to get this polytope to exactly be the convex hull of the encodings of the solutions to the problem.
One might expect, assuming P is not equal to NP, that we can’t achieve this goal using a polynomial-sized LP for NP-hard problems, because optimization over polytopes is possible in time polynomial in the size of the LP.
However, proving such a result is not easy: how does one go about showing that no LP, however cleverly formulated, can do this? Starting with the work of Yannakakis in 1991, this question has seen much progress in recent years, and has revealed connections to a number of other domains including communication complexity, information theory, Fourier analysis and quantum computation along the way.
We will first see a general overview of the results in this area, starting from the basics. I will then present a beautiful (and short) result due to Fiorini et al. that shows that capturing the TSP problem exactly requires exponential-sized LPs.
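As a toy illustration of what “the polytope is exactly the convex hull of the encodings of the solutions” means (a sketch of my own, not from the paper), the following Python snippet builds the 0/1 edge-indicator vectors of all tours on 4 cities; their convex hull is the TSP polytope in this tiny case:

```python
from itertools import permutations, combinations

cities = range(4)
edges = list(combinations(cities, 2))          # the 6 undirected edges

def tour_vector(order):
    """0/1 indicator vector over `edges` for the Hamiltonian cycle `order`."""
    used = {frozenset((order[i], order[(i + 1) % len(order)]))
            for i in range(len(order))}
    return tuple(int(frozenset(e) in used) for e in edges)

# Each distinct tour gives one point; the TSP polytope is their convex hull.
points = {tour_vector((0,) + p) for p in permutations(range(1, 4))}
for p in sorted(points):
    print(p)
# 3 distinct tours on 4 cities -> 3 vertices. The lower bound of Fiorini et al.
# says that, for general n, any LP whose feasible region projects to exactly
# this convex hull must have exponential size, even with auxiliary variables.
```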
The main result of the talk is based on the paper “Linear vs. semidefinite extended formulations: exponential separation and strong lower bounds” by Fiorini, Massar, Pokutta, Tiwary and de Wolf (winner of the STOC ’12 Best Paper award).