Please find below a list of papers accepted to STOC 09, including
their authors and abstracts. (The abstracts are those submitted by the
authors -- apologies that many are fairly unreadable.)
This list is preliminary and subject to change.
Small-size $\eps$-Nets for Axis-Parallel Rectangles and Boxes
Boris Aronov and Esther Ezra and Micha Sharir
We show the existence of $\eps$-nets of size
$O\left(\frac{1}{\eps}\log\log\frac{1}{\eps}\right)$ for planar point
sets and axis-parallel rectangular ranges. The same bound holds for
points in the plane with ``fat'' triangular ranges, and for point sets
in 3-space and axis-parallel boxes; these are the first known
non-trivial bounds for these range spaces. Our technique also yields
improved bounds on the size of $\eps$-nets in the more general context
considered by Clarkson and Varadarajan. For example, we show the
existence of $\eps$-nets of size
$O\left(\frac{1}{\eps}\log\log\log\frac{1}{\eps}\right)$ for the
``dual'' range space of ``fat'' regions and planar point sets.
Plugging our bounds into the technique of Br\"onnimann and Goodrich,
we obtain improved approximation factors (computable in randomized
polynomial time) for the \textsc{hitting set} or the \textsc{set
cover} problems associated with the corresponding (primal or dual)
range spaces.
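As an illustration of the $\eps$-net definition used above (this checker is not from the paper, which is about proving small nets exist), one can verify a candidate net against axis-parallel rectangular ranges by brute force:

```python
from itertools import product

def is_eps_net(net, points, ranges, eps):
    """Check the eps-net condition: every axis-parallel rectangle
    (x1, y1, x2, y2) containing at least eps*|points| of the points
    must also contain a net point. Brute-force illustration only."""
    threshold = eps * len(points)
    def inside(p, r):
        return r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]
    for r in ranges:
        if sum(inside(p, r) for p in points) >= threshold:
            if not any(inside(q, r) for q in net):
                return False
    return True

pts = [(i, j) for i, j in product(range(4), repeat=2)]   # a 4x4 grid
rects = [(0, 0, 1, 3), (2, 0, 3, 3), (0, 0, 3, 0)]
print(is_eps_net([(1, 1), (2, 2), (1, 0)], pts, rects, eps=0.25))  # True
```

The point of the paper is that nets of size barely above $1/\eps$ always suffice for such ranges, far smaller than the general $O(\frac{1}{\eps}\log\frac{1}{\eps})$ bound.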
Non-monotone Submodular Maximization under Matroid and Knapsack Constraints
Jon Lee and Vahab S. Mirrokni and Viswanath Nagarajan and Maxim Sviridenko
Submodular function maximization is a central problem in combinatorial
optimization, generalizing many important problems including Max Cut
in directed/undirected graphs and in hypergraphs, certain constraint
satisfaction problems, maximum entropy sampling, and maximum facility
location problems. Unlike submodular minimization, submodular
maximization is NP-hard. In this paper, we give the first
constant-factor approximation algorithms for maximizing any
non-negative submodular function subject to multiple matroid
or knapsack constraints. We emphasize that our results are for
non-monotone submodular functions. In particular, for any constant
$k$, we present a $\left({1\over k+2+{1\over
k}+\epsilon}\right)$-approximation for submodular maximization under
$k$ matroid constraints, and a $\left({1\over
5}-\epsilon\right)$-approximation algorithm for this problem subject
to $k$ knapsack constraints ($\epsilon>0$ is any constant). We improve
the approximation guarantee of our algorithm to ${1\over k+1+{1\over
k-1}+\epsilon}$ for $k\ge 2$ partition matroid constraints. This idea
also gives a $\left({1\over k+\epsilon}\right)$-approximation for
maximizing a monotone submodular function subject to $k\ge 2$
partition matroids, which improves over the previously best known
guarantee of $\frac{1}{k+1}$.
Private Coresets
Dan Feldman and Amos Fiat and Haim Kaplan and Kobbi Nissim
We forge a link between coresets, which have found use in a vast host
of geometric settings, and differentially private databases that can
answer any number of queries without compromising privacy --- queries
of the same class well approximated by the coreset. Such queries imply
efficient privacy-preserving optimization algorithms for a wide range
of optimization problems, but are also of independent interest.

We give (computationally inefficient) transformations from any coreset
for queries with low generalized sensitivity to differentially private
databases for the same queries. This greatly extends the work of Blum,
Ligett, and Roth \cite{BLR08}, and McSherry and Talwar \cite{MT07}.

We define the notion of {\sl private coresets}, which are
simultaneously both coresets {\sl and} differentially private. We
produce {\sl computationally efficient} private coresets for
$k$-median and $k$-mean in $\Re^d$, i.e., efficient differentially
private databases for such queries. Private coresets also imply
efficient coalition-proof mechanisms and error-resilient data
structures.

Unlike coresets, which only have a multiplicative approximation
factor, we prove that private coresets must have a (small) additive
error.
Green's Conjecture and Testing Linear-Invariant Properties
Asaf Shapira
Given a set of linear equations Mx=b, we say that a set of integers S
is (M,b)-free if it contains no solution to this system of
equations. Motivated by questions related to testing linear-invariant
properties of Boolean functions, as well as recent investigations in
additive number theory, the following conjecture was raised
(implicitly) by Green and by Bhattacharyya, Chen, Sudan and Xie: we
say that a set of integers $S \subseteq [n]$ is $\epsilon$-far from
being (M,b)-free if one needs to remove at least $\epsilon n$ elements
from S in order to make it (M,b)-free. The conjecture was that for any
system of homogeneous linear equations Mx=0 and for any $\epsilon > 0$
there is a *constant* time randomized algorithm that can distinguish
(with high probability) between sets of integers that are (M,0)-free
and sets that are $\epsilon$-far from being (M,0)-free. In other
words, for any M there is an efficient testing algorithm for the
property of being (M,0)-free. In this paper we confirm the above
conjecture by showing that such a testing algorithm exists even for
non-homogeneous linear equations. An interesting aspect of our
analysis is that, as opposed to most results on testing Boolean
functions, which rely on algebraic and analytic arguments, our proof
relies on results from extremal hypergraph theory, such as the recent
removal lemmas of Gowers, R\"odl et al. and Austin and Tao.
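For small sets, the (M,b)-freeness property defined above can be checked exhaustively; a minimal sketch (illustrative only -- a property tester samples rather than enumerates):

```python
from itertools import product

def is_Mb_free(S, M, b):
    """Return True iff no tuple x of elements of S (repeats allowed)
    satisfies the linear system M x = b. Brute force, for illustration."""
    elems = sorted(S)
    k = len(M[0])  # number of variables
    for x in product(elems, repeat=k):
        if all(sum(m_i * x_i for m_i, x_i in zip(row, x)) == b_j
               for row, b_j in zip(M, b)):
            return False  # found a solution -> not (M,b)-free
    return True

M, b = [[1, 1, -1]], [0]             # the single equation x + y = z
print(is_Mb_free({1, 3, 5}, M, b))   # True: no pairwise sum lands in the set
print(is_Mb_free({1, 2, 3}, M, b))   # False: 1 + 2 = 3
```

The result above says the property of being (M,b)-free is testable with a number of queries depending only on $\epsilon$, not on $n$.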
Distributed (Delta+1)-coloring in linear (in Delta) time
Leonid Barenboim and Michael Elkin
The distributed (Delta + 1)-coloring problem is one of the most
fundamental and well-studied problems in Distributed Algorithms.
Starting with the work of Cole and Vishkin in '86, a long line of
gradually improving algorithms has been published. The current
state-of-the-art running time is O(Delta * log Delta + log^* n), due
to Kuhn and Wattenhofer, PODC'06. Linial (FOCS'87) proved a lower
bound of (1/2) * log^* n for the problem, and Szegedy and Vishwanathan
(STOC'93) provided a heuristic argument showing that algorithms from a
wide family of local iterative algorithms are unlikely to achieve
running time smaller than Theta(Delta * log Delta).

We present a deterministic (Delta + 1)-coloring distributed algorithm
with running time O(Delta) + (1/2) * log^* n. We also present a
tradeoff between the running time and the number of colors, and devise
an O(Delta^{1+epsilon})-coloring algorithm with running time
O(Delta^{1-epsilon} + log^* n), for any constant epsilon, 0 < epsilon
< 1/4. Our algorithm breaks the heuristic barrier of Szegedy and
Vishwanathan, and achieves running time linear in the maximum degree
Delta. On the other hand, the conjecture of Szegedy and Vishwanathan
may still be true, as our algorithm is not from the family of local
iterative algorithms.

On the way to this result we introduce a generalization of the notion
of graph coloring, which we call relaxed coloring. In an m-relaxed
p-coloring the vertices are colored with p colors so that each vertex
has up to m neighbors with the same color. We show that an m-relaxed
p-coloring with reasonably small m and p can be computed very
efficiently. We also develop a technique to employ multiple relaxed
colorings of various subgraphs of the original graph G for computing a
(Delta+1)-coloring of G. We believe that these techniques and the
notion of relaxed coloring are of independent interest.
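The relaxed-coloring notion above is easy to state operationally; a small checker (illustrative, not part of the paper's distributed algorithm):

```python
def is_relaxed_coloring(adj, color, m, p):
    """Check that `color` is an m-relaxed p-coloring of the graph given
    by adjacency lists `adj`: it uses p colors, and every vertex has at
    most m neighbors sharing its color."""
    if any(c < 0 or c >= p for c in color.values()):
        return False
    return all(sum(1 for u in adj[v] if color[u] == color[v]) <= m
               for v in adj)

# A triangle: coloring all vertices alike is a 2-relaxed 1-coloring,
# but not a 1-relaxed one (each vertex has two same-colored neighbors).
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
same = {0: 0, 1: 0, 2: 0}
print(is_relaxed_coloring(tri, same, m=2, p=1))  # True
print(is_relaxed_coloring(tri, same, m=1, p=1))  # False
```

Note that a 0-relaxed p-coloring is just a proper p-coloring, which is the sense in which this generalizes graph coloring.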
Affine Dispersers from Subspace Polynomials
Eli Ben-Sasson, Swastik Kopparty
An affine disperser over $\F_2^n$ for sources of dimension $d$ is a
function $f: \F_2^n \rightarrow \F_2$ such that for any affine space
$S \subseteq \F_2^n$ of dimension at least $d$, we have $\{f(s) : s
\in S\} = \F_2$. Affine dispersers have been considered in the context
of deterministic extraction of randomness from structured sources of
imperfect randomness. Previously, explicit constructions of affine
dispersers were known for every $d = \Omega(n)$, due to Barak et
al.~\cite{BKSSW} and Bourgain~\cite{Bourgain} (the latter in fact
gives stronger objects called affine extractors).

In this work we give the first explicit affine dispersers for {\em
sublinear} dimension. Specifically, our dispersers work even when $d =
\Omega(n^{4/5})$. The main novelty in our construction lies in the
method of proof, which relies on elementary properties of {\em
subspace polynomials}. In contrast, the previous works mentioned above
relied on sum-product theorems for finite fields.
A New Line of Attack on the Dichotomy Conjecture
Gabor Kun and Mario Szegedy
The well-known dichotomy conjecture of Feder and Vardi states that for
every family $\family$ of constraints, $\csp(\family)$ is either
polynomially solvable or NP-hard. Bulatov and Jeavons reformulated
this conjecture in terms of the properties of the algebra
$Pol(\family)$, where the latter is the collection of those $m$-ary
operations ($m = 1,2,\ldots$) that keep all constraints in $\family$
invariant. We show that the algebraic condition boils down to whether
there are arbitrarily resilient functions in $Pol(\family)$.
Equivalently, we can express this in terms of PCP theory:
$\csp(\family)$ is NP-hard iff all long code tests created from
$\family$ that pass with zero error admit only
juntas\footnotemark. Then, using this characterization and a result of
Dinur, Friedgut and Regev, we give an entirely new and transparent
proof of the Hell-Ne\v{s}et\v{r}il theorem, which states that for a
simple connected undirected graph $H$, the problem $\csp(H)$ is
NP-hard if and only if $H$ is non-bipartite.

We also introduce another notion of resilience (we call it strong
resilience), and we use it in the investigation of $\csp$ problems
that 'do not have the ability to count.' The complexity of this class
is unknown. Several authors conjectured that $\csp$ problems without
the ability to count have bounded width, or equivalently, that they
can be characterized by existential $k$-pebble games. The resolution
of this conjecture would be a major step towards the resolution of the
dichotomy conjecture. We show that $\csp$ problems without the ability
to count are exactly the ones with strongly resilient term operations,
which might give a handier tool to attack the conjecture than the
known algebraic characterizations.
Exact Learning of Random DNF Formulas Under the Uniform Distribution
Linda Sellie
We show that randomly generated c log(n)-DNF formulas can be learned
exactly in probabilistic polynomial time using randomly generated
examples. Our notion of randomly generated is with respect to a
uniform distribution. To prove this we extend the concept of
well-behaved c log(n)-Monotone DNF formulae to c log(n)-DNF, and show
that almost every DNF formula is well-behaved, and that there exists a
probabilistic Turing machine that exactly learns all well-behaved
c log(n)-DNF formulas. This is the first algorithm that learns a large
class of DNF with a polynomial number of terms from random examples
alone.
On Proximity Oblivious Testing
Oded Goldreich and Dana Ron
We initiate a systematic study of a special type of property testers.
These testers consist of repeating a basic test a number of times that
depends on the proximity parameter, whereas the basic test is
oblivious of the proximity parameter. We refer to such basic tests by
the term Proximity-Oblivious Testers.

While proximity-oblivious testers were studied before - most notably
in the algebraic setting - the current study seems to be the first one
to focus on graph properties. We provide a mix of positive and
negative results, and in particular characterizations of the graph
properties that have constant-query proximity-oblivious testers in the
two standard models (i.e., the adjacency matrix and the bounded-degree
models). Furthermore, we show that constant-query proximity-oblivious
testers do not exist for many easily testable properties, and that
even when proximity-oblivious testers exist, repeating them does not
necessarily yield the best standard testers for the corresponding
property.
Explicit construction of a small epsilon-net for linear threshold functions
Yuval Rabani and Amir Shpilka
We give explicit constructions of epsilon-nets for linear threshold
functions on the binary cube and on the unit sphere. The size of the
constructed nets is polynomial in the dimension $n$ and in
$\frac{1}{\eps}$. To the best of our knowledge no such constructions
were previously known. Our results match, up to the exponent of the
polynomial, the bounds that are achieved by probabilistic arguments.

As a corollary we also construct subsets of the binary cube that have
size polynomial in $n$ and covering radius of $\frac{n}{2} -
c\sqrt{n\log n}$, for any constant $c$. This improves upon the
well-known construction of dual BCH codes that only guarantees
covering radius of $\frac{n}{2} - c\sqrt{n}$.
Quantum Algorithms using the Curvelet Transform
Yi-Kai Liu
The curvelet transform is a directional wavelet transform over R^n,
originally due to Candes and Donoho (2002). It is used to analyze
functions that have singularities along smooth surfaces. I demonstrate
how this can lead to new quantum algorithms. I give an efficient
implementation of a quantum curvelet transform, together with two
applications: a single-shot measurement procedure for approximately
finding the center of a ball in R^n, given a quantum-sample over the
ball; and, a quantum algorithm for finding the center of a radial
function over R^n, given oracle access to the function. I conjecture
that algorithm (1) succeeds with constant probability using 1
quantum-sample, and algorithm (2) succeeds with constant probability
using O(1) oracle queries. This holds independent of the dimension n
--- this can be interpreted as a quantum speed-up. Finally, I prove
some rigorous bounds on the distribution of probability mass for the
continuous curvelet transform. This almost proves my conjecture,
except for issues of discretization.
Multiple Intents Re-Ranking
Yossi Azar and Iftah Gamzu and Xiaoxin Yin
One of the most fundamental problems in web search is how to re-rank
result web pages based on user logs. Most traditional models for
re-ranking assume each query has a single intent. That is, they assume
all users formulating the same query have similar preferences over the
result web pages. It is clear that this is not true for a large
portion of queries, as different users may have different preferences
over the result web pages. Accordingly, a more accurate model should
assume that queries have multiple intents.

In this paper, we introduce the \emph{multiple intents re-ranking}
problem. This problem captures scenarios in which some user makes a
query, and there is no information about its real search intent. In
such a case, one would like to re-rank the search results in a way
that minimizes the efforts of all users in finding their relevant web
pages. More formally, the setting of this problem consists of various
types of users, each of which is interested in some subset of the
search results. Furthermore, each user type has a non-negative profile
vector. Consider some ordering of the search results. This order
determines a position for each search result, and induces a position
vector of the results relevant to each user type. The overhead of a
user type is the dot product of its profile vector and its induced
position vector. The goal is to order the search results so as to
minimize the average overhead of the users.

Our main result is an $O(\log r)$-approximation algorithm for the
problem, where $r$ is the maximum number of search results that are
relevant to any user type. The algorithm is based on a new technique,
which we call harmonic interpolation. In addition, we consider two
important special cases. The first case is when the profile vector of
each user type is non-increasing. This case is a generalization of the
well-known \emph{min-sum set cover} problem. We extend the techniques
of Feige, Lov{\'a}sz and Tetali (Algorithmica '04), and present an
algorithm achieving a $4$-approximation. The second case is when the
profile vector of each user type is non-decreasing. This case
generalizes the \emph{minimum latency set cover} problem. We devise an
LP-based algorithm that attains a $2$-approximation. In particular,
this result improves the $e$-approximation for minimum latency set
cover due to Hassin and Levin (ESA '05).
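The objective described above can be made concrete with a small sketch; one plausible reading of the definition (not the paper's code) pairs each user type's profile weights with the sorted positions of its relevant results:

```python
def average_overhead(order, relevant, profiles):
    """Average overhead of an ordering of search results. Each user
    type t has a relevant set relevant[t] and a profile vector
    profiles[t]; its overhead is the dot product of the profile with
    the (sorted, 1-based) positions of its relevant results."""
    pos = {r: i + 1 for i, r in enumerate(order)}
    total = 0.0
    for t in relevant:
        positions = sorted(pos[r] for r in relevant[t])
        total += sum(w * p for w, p in zip(profiles[t], positions))
    return total / len(relevant)

# Two user types over results a..c; type 0 wants {a}, type 1 wants {b, c}.
relevant = {0: {"a"}, 1: {"b", "c"}}
profiles = {0: [1.0], 1: [1.0, 0.5]}
print(average_overhead(["a", "b", "c"], relevant, profiles))  # 2.25
```

With all-ones profiles this objective reduces to min-sum-set-cover-style latency, which is the special case connected to the $4$-approximation above.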
A constructive proof of the Lovasz Local Lemma
Robin A. Moser
The Lovasz Local Lemma is a powerful tool to prove the existence of
combinatorial objects meeting a prescribed collection of criteria. The
technique can directly be applied to the satisfiability problem,
yielding that a k-CNF formula in which each clause has common
variables with at most 2^(k-2) other clauses is always satisfiable.
All hitherto known proofs of the Local Lemma are non-constructive and
thus do not provide a recipe for how a satisfying assignment to such a
formula can be efficiently found. In his breakthrough paper, Beck
demonstrated that if the neighbourhood of each clause is restricted to
O(2^(k/48)), a polynomial time algorithm for the search problem
exists. Alon simplified and randomized his procedure and improved the
bound to O(2^(k/8)). Srinivasan presented a variant that achieves a
bound of essentially O(2^(k/4)). In an earlier publication, we
improved this to O(2^(k/2)). In the present paper, we give a
randomized algorithm that finds a satisfying assignment to every k-CNF
formula in which each clause has a neighbourhood of at most the
asymptotic optimum of 2^(k-5)-1 other clauses and that runs in
expected time polynomial in the size of the formula, irrespective of
k. If k is considered a constant, we can also give a deterministic
variant. In contrast to all previous approaches, our analysis no
longer invokes the standard non-constructive versions of the Local
Lemma and can therefore be considered an alternative, constructive
proof of it.
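The algorithmic core is a resampling scheme: while some clause is violated, re-randomize only that clause's variables. A minimal sketch of this idea (a simplified illustration, not the paper's exact procedure or its analysis):

```python
import random

def resample_search(clauses, n, rng=random.Random(0)):
    """Resample-until-satisfied sketch for a k-CNF formula.
    A clause is a list of literals (+v for x_v, -v for not x_v, 1-based)."""
    assign = [rng.random() < 0.5 for _ in range(n)]
    def sat(cl):
        return any(assign[abs(l) - 1] == (l > 0) for l in cl)
    while True:
        bad = next((cl for cl in clauses if not sat(cl)), None)
        if bad is None:
            return assign
        for l in bad:                       # resample the violated clause
            assign[abs(l) - 1] = rng.random() < 0.5

# (x1 or x2) and (not x1 or x3): sparse enough for resampling to halt fast.
a = resample_search([[1, 2], [-1, 3]], n=3)
print((a[0] or a[1]) and ((not a[0]) or a[2]))  # True
```

The nontrivial content of the paper is that, under the stated neighbourhood bound, the expected number of resamplings is polynomial in the formula size.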
Holant Problems and Counting CSP
Jin-Yi Cai and Pinyan Lu and Mingji Xia
We propose and explore a novel alternative framework to study the
complexity of counting problems, called Holant Problems. Compared to
counting Constraint Satisfaction Problems (\#CSP), it is a
refinement with a more explicit role for the function constraints.
Both graph homomorphism and \#CSP can be viewed as special cases of
Holant Problems. We prove complexity dichotomy theorems in this
framework. Because the framework is more stringent, previous
dichotomy theorems for \#CSP problems no longer apply. Indeed, we
discover surprising tractable subclasses of counting problems, which
could not have been easily specified in the \#CSP framework. The
main technical tool we use and develop is holographic reductions.
Another technical tool used in combination with holographic
reductions is polynomial interpolation.
The study of Holant Problems led us to discover and prove
a complexity dichotomy theorem for the most general
form of Boolean \#CSP where every constraint function takes values
in the complex number field ${\mathbb C}$.
Every planar graph is the intersection graph of segments in the plane
Jeremie Chalopin and Daniel Goncalves
Given a set S of segments in the plane, the intersection graph of S is
the graph with vertex set S in which two vertices are adjacent if and
only if the corresponding two segments intersect. We prove a
conjecture of Scheinerman (PhD Thesis, Princeton University, 1984)
that every planar graph is the intersection graph of some segments in
the plane.
3-Query Locally Decodable Codes of Subexponential Length
Klim Efremenko
Locally Decodable Codes (LDC) allow one to decode any particular
symbol of the input message by making a constant number of queries to
a codeword, even if a constant fraction of the codeword is damaged. In
recent work~\cite{Yekhanin08} Yekhanin constructs a $3$-query LDC with
sub-exponential length of size $\exp(\exp(O(\frac{\log n}{\log\log
n})))$. However, this construction requires the conjecture that there
are infinitely many Mersenne primes. In this paper we give an
unconditional $3$-query LDC construction with shorter codeword length
of $\exp(\exp(O(\sqrt{\log n \log\log n})))$. We also give a
$2^r$-query LDC with length $\exp(\exp(O(\sqrt[r]{\log n \log\log^{r-1}
n})))$. The main ingredient in our construction is the existence of
super-polynomial size set-systems with restricted intersections
\cite{Grolmusz00}, which exist only over composite numbers.
Fault-Tolerant Spanners for General Graphs
S. Chechik and M. Langberg and D. Peleg and L. Roditty
The paper concerns graph spanners that are resistant to vertex or edge
failures. Given a weighted undirected $n$-vertex graph $G=(V,E)$ and
an integer $k \geq 1$, the subgraph $H=(V,E')$, $E'\subseteq E$, is a
{\em spanner} of stretch $k$ (or, a $k$-spanner) of $G$ if
$\delta_H(u,v) \leq k\cdot \delta_G(u,v)$ for every $u,v \in V$, where
$\delta_{G'}(u,v)$ denotes the distance between $u$ and $v$ in $G'$.
Graph spanners have been extensively studied since their introduction
over two decades ago. It is known how to efficiently construct a
$(2k-1)$-spanner of size $O(n^{1+1/k})$, and this size-stretch
tradeoff is conjectured to be tight.

The notion of {\em fault tolerant} spanners was introduced a decade
ago in the geometric setting [Levcopoulos et al., STOC'98]. A subgraph
$H$ is an $f$-vertex fault tolerant $k$-spanner of the graph $G$ if
for any set $F\subseteq V$ of size at most $f$ and any pair of
vertices $u,v \in V\setminus F$, the distances in $H$ satisfy
$\delta_{H\setminus F}(u,v) \leq k\cdot \delta_{G\setminus F}(u,v)$.
Levcopoulos et al. presented an efficient algorithm that constructs an
$f$-vertex fault tolerant \emph{geometric} $(1+\epsilon)$-spanner;
that is, given a set $S$ of $n$ points in $\mathbb{R}^d$, their
algorithm finds a sparse graph $H$ such that for every set $F\subseteq
S$ of size $f$ and any pair of points $u,v \in S\setminus F$ it
satisfies that $\delta_{H\setminus F}(u,v) \leq (1+\epsilon) |uv|$,
where $|uv|$ is the Euclidean distance between $u$ and $v$. A fault
tolerant geometric spanner with optimal maximum degree and total
weight was presented in [Czumaj and Zhao, SoCG'03]. This paper also
raised as an open problem the question of whether it is possible to
obtain a fault tolerant spanner for an arbitrary undirected weighted
graph.

The current paper answers this question in the affirmative, presenting
an $f$-vertex fault tolerant $(2k-1)$-spanner whose size is $O(f^3
k^{f+1} \cdot n^{1+1/k}\log^{1-1/k}n)$. Interestingly, the stretch of
the spanner remains unchanged while the size of the spanner only
increases by a factor that depends on the stretch $k$, on the number
of potential faults $f$, and on logarithmic terms in $n$. In addition,
we consider the simpler setting of $f$-{\em edge} fault tolerant
spanners (defined analogously). We present an $f$-edge fault tolerant
$(2k-1)$-spanner with edge set of size $O(f\cdot n^{1+1/k})$ (only $f$
times larger than standard spanners). For both edge and vertex faults,
our results are shown to hold when the given graph $G$ is weighted.
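The fault-tolerance condition is concrete enough to check by brute force on small unweighted examples; a sketch of such a verifier (illustrative only, unrelated to the paper's construction):

```python
from itertools import combinations
from collections import deque

def dist(adj, src, dst, removed):
    """BFS distance in the unweighted graph with `removed` vertices deleted."""
    if src in removed or dst in removed:
        return float("inf")
    seen, q = {src}, deque([(src, 0)])
    while q:
        v, d = q.popleft()
        if v == dst:
            return d
        for u in adj[v]:
            if u not in seen and u not in removed:
                seen.add(u)
                q.append((u, d + 1))
    return float("inf")

def is_vertex_ft_spanner(adj_G, adj_H, k, f):
    """Brute-force check of the f-vertex fault tolerant k-spanner
    condition: for every fault set F of size <= f and every surviving
    pair, dist_H(u,v | F) <= k * dist_G(u,v | F)."""
    V = list(adj_G)
    for r in range(f + 1):
        for F in combinations(V, r):
            F = set(F)
            for u, v in combinations([x for x in V if x not in F], 2):
                if dist(adj_H, u, v, F) > k * dist(adj_G, u, v, F):
                    return False
    return True

# A 4-cycle is a 3-spanner of the complete graph K4, even with one fault.
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_vertex_ft_spanner(K4, C4, k=3, f=1))  # True
```

Naturally, the interesting question answered by the paper is how to build such spanners sparsely for arbitrary weighted graphs, not how to verify them.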
Bit-Probe Lower Bounds for Succinct Data Structures
Emanuele Viola
We prove lower bounds on the redundancy necessary to represent a set
$S$ of objects using a number of bits close to the
information-theoretic minimum $\log_2 |S|$, while answering various
queries by probing few bits. Our main results are:

\begin{itemize}
\item We prove that, to represent $n$ ternary elements $t \in \zot^n$
in terms of binary elements $\zo^u$ while accessing a single element
$t_i \in \zot$ by non-adaptively probing $q$ bits, one needs
$$u \geq (\log_2 3)n + n/2^{q^{O(1)}}.$$ This matches a recent,
exciting work by P{\v a}tra{\c s}cu (FOCS 2008) who gives a
representation where $u \leq (\log_2 3)n + n/2^{q^{\Omega(1)}}$ (also
non-adaptive).

\item We also prove a similar lower bound for the dictionary problem
of representing sets of size $n/3$ from a universe of $n$ elements in
terms of binary elements $\zo^u$ while answering membership queries by
non-adaptively probing $q$ bits: one needs
$$u \geq \log_2 \binom{n}{n/3} + n/2^{q^{O(1)}} - \log n.$$
\end{itemize}

Both results above hold when replacing the constant ``$3$'' with any
other constant which is not a power of $2$. They extend to randomized
representations, and imply weaker yet interesting bounds for adaptive
probes as well. Ours are the first lower bounds for these fundamental
problems; we obtain them by drawing on ideas used in lower bounds for
locally decodable codes.
A Nearly Optimal Oracle for Avoiding Failed Vertices and Edges
Aaron Bernstein and David Karger
We present an improved oracle for the distance sensitivity problem.
The goal is to preprocess a directed graph $G = (V,E)$ with
non-negative edge weights to answer queries of the form: what is the
length of the shortest path from x to y that does not go through some
{\it failed} vertex or edge f? The state-of-the-art algorithm produces
an oracle of size $\widetilde{O}(n^{2})$ that has an $O(1)$ query
time, and an $\widetilde{O}(n^{2}\sqrt{m})$ construction time. It is a
randomized Monte Carlo algorithm, so it works with high probability.
Our oracle also has a constant query time and an $\widetilde{O}(n^2)$
space requirement, but it has an improved construction time of
$\widetilde{O}(mn)$, and it is deterministic. Note that $O(1)$ query,
$O(n^{2})$ space, and $O(mn)$ construction time is also the best known
bound (up to logarithmic factors) for the simpler problem of finding
all pairs shortest paths in a weighted, directed graph. Thus, barring
an improvement in the all pairs shortest paths problem, our oracle is
optimal up to logarithmic factors.
An Efficient Algorithm for Partial Order Production
Jean Cardinal and Samuel Fiorini and Gwenael Joret and Raphael M. Jungers and J. Ian Munro
We consider the problem of {\em partial order production}: arrange the
elements of an unknown totally ordered set $T$ into a target partially
ordered set $S$, by comparing a minimum number of pairs in $T$.
Special cases of this problem include sorting by comparisons,
selection, multiple selection, and heap construction.

We give an algorithm performing $ITLB + o(ITLB) + O(n)$ comparisons in
the worst case. Here, $n$ denotes the size of the ground set, and
$ITLB$ denotes a natural information-theoretic lower bound on the
number of comparisons needed to produce the target poset. The overall
complexity of our algorithm is polynomial. This answers a question of
Yao (SICOMP, 1989).

Our strategy is to extend the poset $S$ to a weak order $W$ whose
corresponding information-theoretic lower bound is provably not much
larger than that for $S$. Taking $W$ instead of $S$ as a target poset,
we then solve the problem by applying a multiple selection algorithm
that performs not much more than $ITLB$ comparisons.

We base our analysis on the entropy of the target poset $S$, a
quantity that can be efficiently computed and provides a good estimate
of $ITLB$ since the latter is, up to a linear error term, $n$ times
the entropy of $S$.
On the Complexity of Communication Complexity
Eyal Kushilevitz and Enav Weinreb
We consider the following question: given a two-argument boolean
function $f$, represented as an $N\times N$ binary matrix, how hard is
it to determine the (deterministic) communication complexity of $f$?

We address two aspects of this question. On the {\em computational}
side, we prove that, under appropriate cryptographic assumptions (such
as the intractability of factoring), the deterministic communication
complexity of $f$ is hard to approximate to within some constant.
Under stronger (yet arguably reasonable) assumptions, we obtain even
stronger hardness results that match the best known approximation.

On the {\em analytic} side, we present a family of functions for which
determining the communication complexity (or even obtaining
non-trivial lower bounds on it) implies proving circuit lower bounds
for some corresponding problems. Such connections between circuit
complexity and communication complexity were known before
(Karchmer-Wigderson 1988) only in the more involved context of
relations (search problems) and not in the context of functions
(decision problems). This result, in particular, may explain the
difficulty of analyzing the communication complexity of certain
functions such as the ``clique vs. independent-set'' family of
functions, introduced by Yannakakis (1988).
List Decoding Tensor Products and Interleaved codes
Parikshit Gopalan and Venkatesan Guruswami and Prasad Raghavendra
We design the first efficient algorithms and prove new combinatorial
bounds for list decoding tensor products of codes and interleaved
codes.

We show that for {\em every} code, the ratio of its list decoding
radius to its minimum distance stays unchanged under the tensor
product operation (rather than squaring, as one might expect). This
gives the first efficient list decoders and new combinatorial bounds
for some natural codes, including multivariate polynomials where the
degree in each variable is bounded.

We show that for {\em every} code, its list decoding radius remains
unchanged under $m$-wise interleaving for an integer $m$. This
generalizes a recent result of Dinur \etal~\cite{DGKS}, who proved
such a result for interleaved Hadamard codes (equivalently, linear
transformations).

Using the notion of generalized Hamming weights, we give better list
size bounds for {\em both} tensoring and interleaving of binary linear
codes. By analyzing the weight distribution of these codes, we reduce
the task of bounding the list size to bounding the number of close-by
low-rank codewords. For decoding linear transformations, using
rank-reduction together with other ideas, we obtain list size bounds
that are tight over small fields.

Our results give better bounds on the list decoding radius than what
is obtained from the Johnson bound, and yield rather general families
of codes decodable beyond the Johnson bound.
Non-Malleability Amplification
Huijia Lin and Rafael Pass
We show a technique for amplifying commitment schemes that are
non-malleable with respect to identities of length t, into ones that
are non-malleable with respect to identities of length \\Omega(2^t),
while only incurring a constant overhead in round-complexity. As a
result we obtain a construction of O(1)^{log^* n}-round (i.e.,
"essentially" constant-round) non-malleable commitments from any
one-way function, and using a black-box proof of security.
Max Cut and the Smallest Eigenvalue
Luca Trevisan
We describe a new approximation algorithm for Max Cut. Our algorithm
runs in $\\tilde O(n^2)$ time, where $n$ is the number of vertices, and
achieves an approximation ratio of $.531$. On instances in which an
optimal solution cuts a $1-\\epsilon$ fraction of edges, our algorithm
finds a solution that cuts a $1-4\\sqrt{\\epsilon} + 8\\epsilon-o(1)$
fraction of edges.

Our main result is a variant of spectral partitioning, which can be
implemented in nearly linear time. Given a graph in which the Max Cut
optimum is a $1-\\epsilon$ fraction of edges, our spectral partitioning
algorithm finds a set $S$ of vertices and a bipartition $L, R=S-L$ of
$S$ such that at least a $1-O(\\sqrt{\\epsilon})$ fraction of the edges
incident on $S$ have one endpoint in $L$ and one endpoint in $R$. (This
can be seen as an analog of Cheeger's inequality for the smallest
eigenvalue of the adjacency matrix of a graph.) Iterating this
procedure yields the approximation results stated above.

A different, more complicated, variant of spectral partitioning leads
to an $\\tilde O(n^3)$ time algorithm that cuts a $1/2 +
e^{-\\Omega(1/\\epsilon)}$ fraction of edges in graphs in which the
optimum is $1/2 + \\epsilon$.
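As a toy illustration of the smallest-eigenvalue connection (not the paper's nearly-linear-time algorithm, which avoids a full eigendecomposition and uses careful thresholding), one can bipartition vertices by the signs of the eigenvector for the smallest adjacency eigenvalue:

```python
import numpy as np

def spectral_max_cut(adj):
    """Toy spectral heuristic for Max Cut: split vertices by the sign of
    the eigenvector for the SMALLEST eigenvalue of the adjacency matrix.
    (Illustrative only; the paper's algorithm is more refined.)"""
    eigvals, eigvecs = np.linalg.eigh(adj)   # eigenvalues in ascending order
    v = eigvecs[:, 0]                        # eigenvector of smallest eigenvalue
    left = v < 0                             # one side of the bipartition
    n = len(adj)
    cut = sum(adj[i, j] for i in range(n) for j in range(i + 1, n)
              if adj[i, j] and left[i] != left[j])
    return left, cut

# complete bipartite graph K_{2,2}: an optimal cut severs all 4 edges
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
side, cut = spectral_max_cut(A)
```

On bipartite graphs the smallest adjacency eigenvalue is the negative of the largest and its eigenvector's signs recover the two sides exactly, which is why this toy example cuts every edge.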
Artin automorphisms, Cyclotomic function fields, and Folded list-decodable codes
Venkatesan Guruswami
Algebraic codes that achieve list decoding capacity were recently
constructed by a careful "folding" of the Reed-Solomon code. The
"low-degree" nature of this folding operation was crucial to the list
decoding algorithm. We show how such folding schemes conducive to list
decoding arise out of the Artin-Frobenius automorphism at primes in
Galois extensions. Using this approach, we construct new folded
algebraic-geometric codes for list decoding based on cyclotomic
function fields with a cyclic Galois group. Such function fields are
obtained by adjoining torsion points of the Carlitz action of an
irreducible $M \\in \\F_q[T]$. The Reed-Solomon case corresponds to the
simplest such extension (corresponding to the case $M=T$). In the
general case, we need to descend to the fixed field of a suitable
Galois subgroup in order to ensure the existence of many degree one
places that can be used for encoding.

Our methods shed new light on algebraic codes and their list decoding,
and lead to new codes achieving list decoding capacity. Quantitatively,
these codes provide list decoding (and list recovery/soft decoding)
guarantees similar to folded Reed-Solomon codes but with an alphabet
size that is only polylogarithmic in the block length. In comparison,
for folded RS codes, the alphabet size is a large polynomial in the
block length. This has applications to fully explicit (with no
brute-force search) binary concatenated codes for list decoding up to
the Zyablov radius.
The Extended BG-Simulation and the Characterization of $t$-Resiliency
Eli Gafni
A distributed task $T$ on $n$ processors is an input/output relation
between a collection of processors' inputs and outputs. While all tasks
are solvable if no processor will ever crash, the FLP result shows that
the task of consensus is not solvable even if just a single processor
may crash. What tasks are solvable if at most $t<n$ processors may
crash, i.e., $t$-resiliently? We characterize $t$-resilient solvability
through a new variant of the BG simulation, the Extended BG simulation,
in which $t+1$ simulators $s_1,\\ldots,s_{t+1}$ simulate the $n$
processors. Simulator $s_i$ outputs for $p_i$ as well as for a set of
processors $PO_i$ of size $n-(t+1)$, where for all $p_j \\in
PO_i,~j>i$. Any processor whose output is in $PO_i$ has its input given
to some simulator. The input/output of the simulators has to be a
projection of a single original input/output tuple-pair in $T$ of a
size that corresponds to all the given inputs.

We demonstrate the convenience that the characterization provides in
two ways. First, we prove a new impossibility result: We show that $n$
processors cannot solve $t$-resiliently the weak renaming task with
$n+(t+1)-2$ names, where $n>1$ and $0<t<n$. Second, we show that
$t$-resilient solvability, for $t>1$ and $n>2$, is undecidable, by a
simple reduction to the undecidability of the wait-free solvability of
3-processor tasks.
Finding Sparse Cuts Locally Using Evolving Sets
Reid Andersen and Yuval Peres
A {\\em local graph partitioning algorithm} finds a set of vertices
with small conductance (i.e. a sparse cut) by adaptively exploring part
of a large graph $G$, starting from a specified vertex. For the
algorithm to be local, its complexity must be bounded in terms of the
size of the set that it outputs, with at most a weak dependence on the
number $n$ of vertices in $G$. Previous local partitioning algorithms
find sparse cuts using random walks and personalized PageRank. In this
paper, we introduce a randomized local partitioning algorithm that
finds a sparse cut by simulating the {\\em volume-biased evolving set
process}, which is a Markov chain on sets of vertices. We prove that
for any set of vertices $A$ that has conductance at most $\\phi$, for
at least half of the starting vertices in $A$ our algorithm will output
(with probability at least half) a set of conductance $O(\\phi^{1/2}
\\log^{1/2} n)$. We prove that for a given run of the algorithm, the
expected ratio between its computational complexity and the volume of
the set that it outputs is $O(\\phi^{-1/2} polylog(n))$. In comparison,
the best previous local partitioning algorithm, due to Andersen, Chung,
and Lang, has the same approximation guarantee, but a larger ratio of
$O(\\phi^{-1} polylog(n))$ between the complexity and output volume.
Using our local partitioning algorithm as a subroutine, we construct a
fast algorithm for finding balanced cuts. Given a fixed value of
$\\phi$, the resulting algorithm has complexity
$O(m+n\\phi^{-1/2})*polylog(n)$ and returns a cut with conductance
$O(\\phi^{1/2} \\log^{1/2} n)$ and volume at least $v_{\\phi}/2$, where
$v_{\\phi}$ is the largest volume of any set with conductance at most
$\\phi$.
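The evolving set process at the heart of the algorithm is a Markov chain on vertex sets. A minimal sketch of one step of the plain (non-volume-biased) chain for a lazy random walk, together with the conductance it targets, might look like this; the small graph and all parameters are illustrative assumptions, and the paper's simulation is of the volume-biased variant and only touches the set's boundary to stay local:

```python
import numpy as np

def lazy_walk_matrix(adj):
    """Transition matrix of the lazy random walk: stay put w.p. 1/2,
    otherwise move to a uniformly random neighbor."""
    deg = adj.sum(axis=1)
    return 0.5 * np.eye(len(adj)) + 0.5 * adj / deg[:, None]

def evolving_set_step(P, A, rng):
    """One step of the evolving set process: draw U ~ Uniform(0,1) and keep
    exactly those vertices whose one-step probability of landing in A is at
    least U."""
    ind = np.zeros(P.shape[0])
    ind[list(A)] = 1.0
    into_A = P @ ind               # into_A[v] = Pr[walk from v lands in A]
    u = rng.random()
    return {v for v in range(P.shape[0]) if into_A[v] >= u}

def conductance(adj, S):
    """Boundary edge weight of S over the smaller side's volume."""
    S = set(S)
    vol = sum(adj[i].sum() for i in S)
    boundary = sum(adj[i, j] for i in S for j in range(adj.shape[0])
                   if j not in S)
    return boundary / min(vol, adj.sum() - vol)

# two triangles joined by a single edge: {0,1,2} is a sparse cut
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
P = lazy_walk_matrix(adj)
```

Starting the chain from a vertex inside a low-conductance set, the evolving set tends to grow along the set and stall at its sparse boundary, which is what the algorithm exploits.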
A Deterministic Reduction for the Gap Minimum Distance Problem
Qi Cheng and Daqing Wan
Determining the minimum distance of a linear code is one of the most
important problems in algorithmic coding theory. The exact version of
the problem was shown to be NP-complete in \\cite{Vardy97}. In
\\cite{DumerMi03}, the gap version of the problem was shown to be
NP-hard for any constant factor under a randomized reduction. In this
paper, we derandomize the reduction and thus prove that there is no
deterministic polynomial time algorithm to approximate the minimum
distance to any constant factor unless $P=NP$. We also prove that the
minimum distance is not approximable in deterministic polynomial time
to the factor $2^{\\log^{1-\\epsilon} n}$ unless $NP \\subseteq
DTIME(2^{polylog(n)})$. As the main technical contribution, for any
constant $2/3 < \\rho < 1$, we present a deterministic algorithm that,
given a positive integer $s$, runs in time $poly(s)$ and constructs a
code $\\code$ of length $poly(s)$ with an explicit Hamming ball of
radius $\\rho d(\\code)$ such that a projection at $s$ coordinates
sends the codewords in the ball surjectively onto a linear subspace of
dimension $s$, where $d(\\code)$ denotes the minimum distance of
$\\code$.
How long does it take to catch a wild kangaroo?
Ravi Montenegro and Prasad Tetali
The discrete logarithm problem asks to solve for the exponent $x$,
given a generator $g$ of a cyclic group $G$ and an element $h\\in G$
such that $g^x=h$. We give the first rigorous proof that Pollard's
Kangaroo method finds the discrete logarithm in expected time
$(3+o(1))\\sqrt{b-a}$ when the logarithm $x\\in[a,b]$, and
$(2+o(1))\\sqrt{b-a}$ when $x\\in_{uar}[a,b]$. This matches the
conjectured time complexity and, rare among analyses of algorithms
based on Markov chains, even the lead constants $2$ and $3$ are
correct.
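For intuition, a toy version of the kangaroo (lambda) method might look like the following; the jump function, step budget, and the tiny group are simplified assumptions for illustration, not the parameterization the paper analyzes:

```python
import math

def kangaroo(g, h, p, a, b):
    """Toy Pollard kangaroo method for the discrete log x in [a, b] with
    g^x = h (mod p).  A 'tame' kangaroo starts from the known exponent b
    and lays a trap at its final position; a 'wild' kangaroo starts from h
    and makes the same position-dependent jumps until it lands in the trap
    (or exhausts its distance budget)."""
    width = b - a + 1
    k = max(1, int(math.log2(max(2, math.isqrt(width)))))
    jump = lambda y: 1 << (y % k)        # pseudorandom jump size from position
    # tame kangaroo: walk forward from g^b, recording total distance
    y, d = pow(g, b, p), 0
    for _ in range(4 * math.isqrt(width) + 16):
        s = jump(y)
        y = y * pow(g, s, p) % p
        d += s
    trap_y, trap_d = y, d
    # wild kangaroo: walk forward from h until it hits the trap
    y, d = h % p, 0
    while d <= (b - a) + trap_d:
        if y == trap_y:
            return b + trap_d - d        # exponents coincide at the trap
        s = jump(y)
        y = y * pow(g, s, p) % p
        d += s
    return None                          # the walks failed to collide
```

Because both walks use the same deterministic jump rule, once the wild kangaroo steps onto any point of the tame kangaroo's path it follows it straight into the trap; the paper's result is that the expected number of jumps is $(2+o(1))\sqrt{b-a}$ for a uniform exponent.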
An Improved Constant-Time Approximation Algorithm for Maximum Matchings
Yuichi Yoshida and Masaki Yamamoto and Hiro Ito
This paper studies approximation algorithms for problems on
degree-bounded graphs. Let $n$ and $d$ be the number of vertices and
the degree bound, respectively.

This paper presents an algorithm to approximate the size of some
maximal independent set with additive error $\\epsilon n$ whose running
time is $\\cmpmisx$. Using this algorithm, it also shows that there are
approximation algorithms for many other problems, e.g., the maximum
matching problem, the minimum vertex cover problem, and the minimum set
cover problem, that run exponentially faster than existing algorithms
with respect to $d$ and $\\frac{1}{\\epsilon}$.

Its approximation algorithm for the maximum matching problem can be
transformed into a testing algorithm for the property of having a
perfect matching with two-sided error. In contrast, it also shows that
every one-sided error tester for the property requires at least
$\\Omega(n)$ queries.
A competitive algorithm for minimizing weighted flow time on unrelated machines with speed augmentation
Jivitej S. Chadha and Naveen Garg and Amit Kumar and V. N. Muralidhara
We consider the online problem of scheduling jobs on unrelated machines
so as to minimize the total weighted flow time. This problem has an
unbounded competitive ratio even for very restricted settings. In this
paper we show that if we allow the machines of the online algorithm to
have $\\epsilon$ more speed than those of the offline algorithm then we
can get an $O(\\epsilon^{-2})$-competitive algorithm.

Our algorithm schedules jobs preemptively but without migration.
However, we compare our solution to an offline algorithm which allows
migration. Our analysis uses a potential function argument which can
also be extended to give a simpler and better proof of the randomized
immediate dispatch algorithm of Chekuri-Goel-Khanna-Kumar for
minimizing average flow time on parallel machines.
Randomly supported independence and resistance
Per Austrin and Johan Håstad
We prove that for any positive integer $k$, there is a constant $c_k$
such that a randomly selected set of $c_k n^k \\log n$ Boolean vectors
with high probability supports a balanced $k$-wise independent
distribution. In the case of $k\\leq 2$ a more elaborate argument gives
the stronger bound $c_k n^k$. Using a recent result by Austrin and
Mossel this shows that a predicate on $t$ bits, chosen at random among
predicates accepting $c_2 t^2$ input vectors, is, assuming the Unique
Games Conjecture, likely to be approximation resistant.

These results are close to tight: we show that there are other
constants, $c_k'$, such that a randomly selected set of cardinality
$c_k' n^k$ points is unlikely to support a balanced $k$-wise
independent distribution and, for some $c>0$, a random predicate
accepting $ct^2/\\log t$ input vectors is non-trivially approximable
with high probability.

In a different application of the result of Austrin and Mossel we prove
that, again assuming the Unique Games Conjecture, {\\em any} predicate
on $t$ bits accepting at least $(59/60) \\cdot 2^t$ inputs is
approximation resistant.

Essentially all results extend from the Boolean domain to larger finite
domains.
Random Graphs and the Parity Quantifier
Phokion Kolaitis and Swastik Kopparty
The classical zero-one law for first-order logic on random graphs says
that for every first-order property $\\varphi$ in the theory of graphs
and every $p \\in (0,1)$, the probability that the random graph $G(n,
p)$ satisfies $\\varphi$ approaches either $0$ or $1$ as $n$ approaches
infinity. It is well known that this law fails to hold for any
formalism that can express the parity quantifier: for certain
properties, the probability that $G(n,p)$ satisfies the property need
not converge, and for others the limit may be strictly between $0$ and
$1$.

In this work, we capture the limiting behavior of properties definable
in first-order logic augmented with the parity quantifier, $\\FOP$,
over $G(n,p)$, thus eluding the above hurdles. Specifically, we
establish the following ``modular convergence law'':
\\begin{quote}
For every $\\FOP$ sentence $\\varphi$, there are two explicitly
computable rational numbers $a_0$, $a_1$, such that for $i \\in
\\{0,1\\}$, as $n$ approaches infinity, the probability that the random
graph $G(2n+i, p)$ satisfies $\\varphi$ approaches $a_i$.
\\end{quote}
Our results also extend appropriately to $\\FO$ equipped with
$\\Mod_q$ quantifiers for prime $q$.

In the process of deriving the above theorem, we explore a new question
that may be of interest in its own right. Specifically, we study the
joint distribution of the subgraph statistics modulo $2$ of $G(n,p)$:
namely, the number of copies, mod $2$, of a fixed number of graphs
$F_1, \\ldots, F_\\ell$ of bounded size in $G(n,p)$. We first show that
every $\\FOP$ property $\\varphi$ is almost surely determined by
subgraph statistics modulo $2$ of the above type. Next, we show that
the limiting joint distribution of the subgraph statistics modulo $2$
depends only on $n \\mod 2$, and we determine this limiting
distribution completely. Interestingly, both these steps are based on a
common technique using multivariate polynomials over finite fields and,
in particular, on a new generalization of the Gowers norm that we
introduce.

The first step above is analogous to the Razborov-Smolensky method for
lower bounds for $\\mathsf{AC0}$ with parity gates, yet stronger in
certain ways. For instance, it allows us to obtain examples of simple
graph properties that are exponentially uncorrelated with every
$\\FOP$ sentence, which is something that is not known for
$\\AC0[\\oplus]$.
Differential Privacy and Robust Statistics
Cynthia Dwork and Jing Lei
In the past several years a promising approach to private data analysis
has emerged, based on the notion of {\\em differential privacy}.
Informally, this ensures that any outcome of an analysis is ``roughly
as likely'' to occur independent of whether any individual opts in to,
or opts out of, the database. In consequence, the specific data of any
one individual can never greatly affect the outcome of the analysis,
ensuring privacy regardless of any additional (``auxiliary'')
information known to the analyst. General techniques for ensuring
differential privacy have now been proposed, and many datamining tasks
can be carried out in a differentially private fashion, frequently with
very accurate results.

In this work we explore privacy-preserving parameter estimation.
Privacy-preserving statistics would appear to be connected to {\\em
robust statistics}, the subfield of statistics that attempts to cope
with several small errors due, for example, to rounding errors in
measurements, as well as a few arbitrarily wild errors occurring, say,
from data entry failures. In consequence, in a robust analysis the
specific data for any one individual should not greatly affect the
outcome of the analysis. It would therefore seem that robust
statistical estimators, or procedures, might form a starting point for
designing accurate differentially private statistical estimators, and
the approach based on influence functions (Hampel et al.) is
particularly suggestive of differential privacy. We report here on a
successful attempt to instantiate this intuition.

Our algorithms follow a new paradigm for differentially private
mechanisms, which we call Propose-Test-Release (PTR). All previous
differentially private algorithms are special cases of this framework.
We give general composition theorems for PTR mechanisms, extending and
refining previous analyses.
Intrinsic Robustness of the Price of Anarchy
Tim Roughgarden
The price of anarchy is a worst-case measure of the inefficiency of
selfish behavior, defined as the ratio of the objective function value
of a worst Nash equilibrium of a game and that of an optimal outcome.
The measure does, however, assume that players successfully reach some
Nash equilibrium. This fact motivates the search for inefficiency
guarantees that apply more generally to weaker notions of equilibria
(such as mixed Nash and correlated equilibria), or to sequences of
outcomes generated by natural experimentation strategies (such as
successive best responses or simultaneous regret-minimization).
Recently, researchers have identified a number of settings in which the
known quantitative bounds on the price of anarchy carry over to other
equilibrium notions and/or the results of natural dynamics.

This paper proves a fundamental connection between the price of anarchy
and its seemingly stronger relatives in classes of games with a sum
objective, such as congestion games and variants thereof: an upper
bound on the worst-case price of anarchy for pure Nash equilibria {\\em
necessarily} implies the exact same worst-case upper bound for mixed
Nash equilibria, correlated equilibria, and the average objective
function value of regret-minimizing players (or "price of total
anarchy"). In this sense, price of anarchy guarantees in such games are
"intrinsically robust". Under mild additional assumptions, our proof
techniques also automatically yield performance guarantees for various
forms of best-response dynamics, as well as bicriteria inefficiency
bounds.

Byproducts of our work include several significant new results for the
inefficiency of equilibria in congestion games: tight bounds on the
price of anarchy for all (not necessarily polynomial) cost functions;
bicriteria bounds; and a structural characterization of games that are
universal worst-case examples for the POA.
On the convergence of regret minimization dynamics in concave games
Eyal Even-Dar and Yishay Mansour and Uri Nadav
We study a general sub-class of concave games, which we call socially
concave games. We show that if each player follows any no-external-
regret minimization procedure then the dynamics converge, in the sense
that the average action vector converges to a Nash equilibrium and the
utility of each player converges to her utility in that Nash
equilibrium.

We show that many natural games are socially concave games.
Specifically, we show that linear Cournot competition and linear
resource allocation games are socially concave games, and therefore our
convergence result applies to them. In addition, we show that a simple
best response dynamic might diverge for linear resource allocation
games, and is known to diverge for a linear Cournot competition. For
TCP congestion games we show that ``near'' the equilibrium these games
are socially concave, and using our general methodology we show
convergence of specific regret minimization dynamics.
CSP Gaps and Reductions in the Lasserre Hierarchy
Madhur Tulsiani
We study integrality gaps for SDP relaxations of constraint
satisfaction problems, in the hierarchy of SDPs defined by Lasserre.
Schoenebeck recently showed the first integrality gaps for these
problems, showing that for MAX k-XOR, the ratio of the integer optimum
to the SDP optimum may be as small as 1/2 even after $\\Omega(n)$
rounds of the Lasserre hierarchy.

We show that for the general MAX k-CSP problem over binary domain, the
ratio of the value achieved by the optimal assignment to the SDP
optimum can be as small as $2k/2^k + \\epsilon$ even after $\\Omega(n)$
rounds of the Lasserre hierarchy. For alphabet size $q$ which is a
prime, we show that the ratio for MAX k-CSP$_q$ is at most
$kq(q-1)/q^k + \\epsilon$ even after $\\Omega(n)$ rounds. The method of
proof also gives optimal integrality gaps for a predicate chosen at
random.

We also explore how to translate integrality gaps for CSPs into
integrality gaps for other problems using reductions. Starting from the
gaps for CSPs, we establish SDP gaps for Maximum Independent Set,
Approximate Graph Coloring, Chromatic Number and Vertex Cover. For
Independent Set and Chromatic Number, we show that the integrality gap
can be as large as $n/2^{O(\\sqrt{\\log n \\log\\log n})}$ even after
$2^{\\Omega(\\sqrt{\\log n \\log\\log n})}$ rounds. In the case of
Approximate Graph Coloring, for every constant $l$, we show examples of
graphs which admit a vector $l$-coloring for the SDP obtained by
$\\Omega(n)$ rounds of Lasserre, even though the actual chromatic
number is $\\Omega(2^{l/2}/l^2)$. For Vertex Cover, we show an
integrality gap of 1.36 for $\\Omega(n^{\\delta})$ rounds, for a small
constant $\\delta$.

The results for CSPs provide the first examples of $\\Omega(n)$ round
integrality gaps matching hardness results known only under the Unique
Games Conjecture. This and some additional properties of the
integrality gap instance allow us to prove gaps for Independent Set and
Chromatic Number which are stronger than the NP-hardness results known
even under the Unique Games Conjecture.
A Constant-Factor Approximation for Stochastic Steiner Forest
Anupam Gupta and Amit Kumar
We consider the stochastic Steiner forest problem: here we are given a
collection of Steiner forest instances, and are guaranteed that a
random one of these instances will appear tomorrow; moreover, the cost
of edges tomorrow will be $\\lambda$ times the cost of edges today.
Which edges should we buy today so that we can extend them to a
solution for the instance arriving tomorrow, to minimize the total
cost? While very general results have been developed for many problems
in stochastic discrete optimization over the past years, the
approximation status of the stochastic Steiner Forest problem has
remained tantalizingly open, with considerable effort yielding
constant-factor approximations only for some special cases of this
problem.

We resolve the status of the stochastic Steiner Forest problem and give
a constant-factor primal-dual based approximation algorithm. Since the
previously-solved special cases had yielded very complicated
primal-dual algorithms, our solution heavily relies on techniques to
reduce the complexity of the dual-growth process. Indeed, a major
technical contribution of our work is to show how to take an optimal
solution to the linear program and use it to factor the problem into
two simpler parts, and how to grow duals and primals for each of these
parts in a non-smooth (but well-controlled) fashion to generate a
constant-factor approximation.
On the geometry of graphs with a forbidden minor
James R. Lee and Anastasios Sidiropoulos
We study the topological simplification of graphs via random
embeddings, leading ultimately to a reduction of the
Gupta-Newman-Rabinovich-Sinclair (GNRS) $L_1$ embedding conjecture to a
pair of manifestly simpler conjectures. The GNRS conjecture
characterizes all graphs that have an $O(1)$-approximate
multi-commodity max-flow/min-cut theorem. In particular, its resolution
would imply a constant factor approximation for the general Sparsest
Cut problem in every family of graphs which forbids some minor. In the
course of our study, we prove a number of results of independent
interest.

- Every metric on a graph of pathwidth $k$ embeds into a distribution
over trees with distortion depending only on $k$. This is equivalent to
the statement that any family of graphs excluding a forest embeds into
a distribution over trees. For graphs of {\\em treewidth} $k$, GNRS
showed that this is impossible even for $k=2$. In particular, we show
that pathwidth-$k$ metrics embed into $L_1$ with bounded distortion,
which resolves an open question even for $k=3$.

- We prove a generic {\\em peeling lemma} which uses random Whitney
decompositions to peel simple structures like handles and apices off of
graphs. This allows a number of new topological reductions. For
example, if $X$ is any metric space in which the removal of $O(1)$
points leaves a bounded genus metric, then $X$ embeds into a
distribution over planar graphs.

- Using these techniques, we show that the GNRS embedding conjecture is
equivalent to two simpler conjectures: (1) the well-known planar
embedding conjecture, and (2) a conjecture about embeddings of $k$-sums
of graphs.
Resilient Knowledge-Based Mechanisms For Truly Combinatorial Auctions (and Implementation in Surviving Strategies)
Jing Chen and Silvio Micali
We put forward a new mechanism achieving a high benchmark for (both
revenue and) the sum of revenue and efficiency in truly combinatorial
auctions. Notably, our mechanism guarantees its performance (1) in a
very adversarial collusion model; (2) for any profile of strategies
surviving the iterated elimination of dominated strategies; and (3) by
leveraging the knowledge that the players have about each other (in a
non-Bayesian setting).

Our mechanism is also computationally efficient, and preserves the
players' privacy to an unusual extent.
Public-Key Cryptosystems from the Worst-Case Shortest Vector Problem
Chris Peikert
We construct public-key cryptosystems that are secure assuming the
\\emph{worst-case} hardness of approximating the length of a shortest
nonzero vector in an $n$-dimensional lattice to within a small
$\\poly(n)$ factor. Prior cryptosystems with worst-case connections
were based either on the shortest vector problem for a \\emph{special
class} of lattices (Ajtai and Dwork, STOC 1997; Regev, J.~ACM 2004), or
on the conjectured hardness of lattice problems for \\emph{quantum}
algorithms (Regev, STOC 2005).

Our main technical innovation is a reduction from certain variants of
the shortest vector problem to corresponding versions of the
``learning with errors'' ($\\lwe$) problem; previously, only a
\\emph{quantum} reduction of this kind was known. In addition, we
construct new cryptosystems based on the \\emph{search} version of
$\\lwe$, including a very natural \\emph{chosen-ciphertext-secure}
system that has a much simpler description and tighter underlying
worst-case approximation factor than prior constructions.
One-Round Authenticated Key Agreement from Weak Secrets
Yevgeniy Dodis and Daniel Wichs
We study the question of information-theoretically secure authenticated
key agreement from weak secrets. In this setting, Alice and Bob share
an $n$-bit secret $W$, which might \\emph{not} be uniformly random, but
the adversary has at least $k$ bits of uncertainty about it (formalized
using conditional min-entropy). Alice and Bob wish to use $W$ to agree
on a nearly uniform secret key $R$, over a public channel controlled by
an \\emph{active} adversary Eve. We show that non-interactive
(single-message) protocols do not work when $k\\le \\frac{n}{2}$, and
require poor parameters even when $\\frac{n}{2} < k \\ll n$.
Twice-Ramanujan Sparsifiers
Joshua D. Batson and Daniel A. Spielman and Nikhil Srivastava
We prove that for every $d>1$ and every undirected, weighted graph
$G(V,E)$ on $n$ vertices, there exists a weighted subgraph $H$ of $G$
with at most $dn$ edges that preserves the Laplacian quadratic form of
$G$ up to a multiplicative error of $d+1 \\pm 2\\sqrt{d}$. Thus, $H$
approximates $G$ spectrally at least as well as a Ramanujan expander
with $dn/2$ edges approximates the complete graph.

We give an elementary, deterministic polynomial time algorithm for
constructing $H$.
Conditional Hardness for Satisfiable-3CSPs
Ryan O'Donnell and Yi Wu
In this paper we study a fundamental open problem in the area of
probabilistically checkable proofs: What is the smallest $s$ such that
NP \\subseteq naPCP_{1,s}[O(log n),3]?

In the language of hardness of approximation, this problem is
equivalent to determining the smallest $s$ such that getting an
$s$-approximation for satisfiable 3-bit constraint satisfaction
problems ("3-CSPs") is NP-hard.

The previous best upper bound and lower bound for $s$ are
$20/27+\\epsilon$ by Khot and Saket [KS06], and $5/8$ (assuming NP
\\nsubseteq BPP) by Zwick [Zwi98]. In this paper we close the gap
assuming Khot's d-to-1 Conjecture. Formally, we prove that if Khot's
d-to-1 Conjecture holds for any finite constant integer $d$, then
NP \\subseteq naPCP_{1,5/8+\\epsilon}[O(log n),3] for any constant
$\\epsilon > 0$.

Our conditional result also solves Håstad's open question [Has01] on
determining the inapproximability of satisfiable Max-NTW ("Not Two")
instances and confirms Zwick's conjecture [Zwi98] that the
5/8-approximation algorithm for satisfiable 3-CSPs is optimal.
Near-Perfect Load Balancing by Randomized Rounding
Tobias Friedrich and Thomas Sauerwald
We consider and analyze a new algorithm for balancing indivisible loads
on a distributed network with $n$ processors. The aim is to minimize
the discrepancy between the maximum and minimum load. In every
time-step paired processors balance their load as evenly as possible.
The direction of the excess token is chosen according to a randomized
rounding of the participating loads.

We prove that in comparison to the corresponding model of Rabani,
Sinclair, and Wanka with arbitrary roundings, the randomization yields
an improvement of roughly a square root of the achieved discrepancy in
the same number of time-steps on all graphs. For the important case of
expanders we can even achieve a constant discrepancy in $O(\\log n
\\,(\\log\\log n)^3)$ rounds. This is optimal up to $\\log\\log
n$-factors, while the best previous algorithms in this setting either
require $\\Omega(\\log^2 n)$ time or can only achieve a logarithmic
discrepancy. Our new result also demonstrates that with randomized
rounding the difference between discrete and continuous load balancing
vanishes almost completely.
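A single time-step of such a protocol can be sketched as follows; the particular pairing (matching) and load values are illustrative assumptions:

```python
import random

def balance_pair(x, y, rng):
    """Balance two integer loads as evenly as possible.  When the total is
    odd, the single excess token goes to a uniformly random side -- the
    randomized rounding; in the arbitrary-rounding model of Rabani,
    Sinclair, and Wanka the side may be chosen adversarially."""
    low, odd = divmod(x + y, 2)
    if odd and rng.random() < 0.5:
        return low + 1, low
    return low, low + odd

def balancing_round(loads, matching, rng):
    """One synchronous time-step: every matched pair balances its load."""
    loads = list(loads)
    for i, j in matching:
        loads[i], loads[j] = balance_pair(loads[i], loads[j], rng)
    return loads

rng = random.Random(0)
loads = balancing_round([10, 0, 5, 1], [(0, 1), (2, 3)], rng)
```

The randomized choice makes the rounding errors cancel over time instead of accumulating, which is the source of the square-root improvement in discrepancy.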
Homology Flows, Cohomology Cuts
Erin W. Chambers and Jeff Erickson and Amir Nayyeri
We describe the first algorithms to compute maximum flows in
surface-embedded graphs in near-linear time. Specifically, given an
undirected graph embedded on an orientable surface of genus~$g$, with
two specified vertices $s$ and $t$, we can compute a maximum
$(s,t)$-flow in $O(g^7 n\\log^2 n\\log^2 C)$ time for integer
capacities that sum to $C$, or in $(g\\log n)^{O(g)} n$ time for real
capacities. Except for the special case of planar graphs, for which
an $O(n\\log n)$-time algorithm has been known for 20 years, the best
previous time bounds for maximum flows in surface-embedded graphs
follow from algorithms for general sparse graphs. Our key insight is
to optimize the relative homology class of the flow, rather than
directly optimizing the flow itself. A dual formulation of our
algorithm computes the minimum-cost cycle or circulation in a given
(real or integer) homology class.
Numerical Linear Algebra in the Streaming Model
Kenneth L. Clarkson and David P. Woodruff
We give near-optimal space bounds in the streaming model for linear
algebra problems that include estimation of matrix products, linear
regression, low-rank approximation, and approximation of matrix rank.
In the streaming model, sketches of input matrices are maintained under
updates of matrix entries; we prove results for turnstile updates,
given in an arbitrary order. We prove the first lower bounds known for
the space needed by the sketches, for a given estimation error
$\\epsilon$. We sharpen prior upper bounds, with respect to
combinations of space, failure probability, and number of passes. The
sketch we use for matrix $A$ is simply $S^TA$, where $S$ is a sign
matrix.

Our results include the following upper and lower bounds on the bits of
space needed for $1$-pass algorithms. Here $A$ is an $n\\times d$
matrix, $B$ is an $n\\times d'$ matrix, and $c := d+d'$. These results
are given for fixed failure probability; for failure probability
$\\delta>0$, the upper bounds require a factor of $\\log(1/\\delta)$
more space. We assume that the inputs have integer entries specified by
$O(\\log(nd))$ bits.

(Matrix Product) Output matrix $C$ with $$\\norm{A^TB-C}_F \\leq
\\epsilon \\norm{A}_F\\norm{B}_F.$$ We show that
$\\Theta(c\\epsilon^{-2}\\log(nc))$ space is needed.

(Linear Regression) For $d'=1$, so that $B$ is a vector $b$, find $x$
so that $$\\norm{Ax-b}_2 \\leq (1+\\epsilon)\\min_{x' \\in
\\mathbb{R}^d} \\norm{Ax'-b}_2.$$ We show that
$\\Theta(d^2\\epsilon^{-1}\\log(nd))$ space is needed.

(Rank-$k$ Approximation) Find matrix $\\tA_k$ of rank no more than $k$,
so that $$\\norm{A-\\tA_k}_F \\leq (1+\\epsilon)\\norm{A-A_k}_F,$$
where $A_k$ is the best rank-$k$ approximation to $A$. Our lower bound
is $\\Omega(k\\epsilon^{-1}(n+d)\\log(nd))$ space, and we give a
one-pass algorithm matching this when $A$ is given row-wise or
column-wise. For general updates, we give a one-pass algorithm needing
$O(k\\epsilon^{-2}(n + d/\\epsilon^2)\\log(nd))$ space. We also give
upper and lower bounds for multiple-pass algorithms, and a sketching
analog of the $CUR$ decomposition.
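To illustrate the sign-matrix sketch for the matrix-product guarantee above, one can estimate $A^TB$ from the two small sketches alone; the dimensions, the $1/\sqrt{m}$ scaling, and the error check below are assumptions for this sketch, not the paper's exact construction:

```python
import numpy as np

# Keep only the m x d sketch S^T A (and S^T B), where S is an n x m matrix
# of +-1/sqrt(m) sign entries, and estimate A^T B by
#   (S^T A)^T (S^T B) = A^T S S^T B,
# which equals A^T B in expectation since E[S S^T] = I.
rng = np.random.default_rng(0)
n, d, d2, m = 500, 10, 8, 2000
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d2))

S = rng.choice([-1.0, 1.0], size=(n, m)) / np.sqrt(m)
C = (S.T @ A).T @ (S.T @ B)        # product estimate from sketches only

err = np.linalg.norm(A.T @ B - C, 'fro')
bound = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
```

With $m$ on the order of $1/\epsilon^2$, the Frobenius error is about an $\epsilon$ fraction of $\|A\|_F\|B\|_F$, matching the shape of the guarantee quoted above.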
Testing Juntas Nearly Optimally
Eric Blais
A function on n variables is called a k-junta if it depends on at most
k of its variables. In this article, we show that it is possible to
test whether a function is a k-junta or is "far" from being a k-junta
with O(k log k)/epsilon queries, where epsilon is the approximation
parameter. This result improves on the previous best upper bound of
O(k^1.5)/epsilon queries and is asymptotically optimal, up to a
logarithmic factor.

We obtain the improved upper bound by introducing a new algorithm with
one-sided error for testing juntas. Notably, the new algorithm is a
valid junta tester under very general conditions: it holds for
functions with arbitrary finite domains and ranges, and it holds under
any product distribution over the domain.

A key component of the analysis of the algorithm is a new structural
result on juntas: roughly, we show that if a function f is "far" from
being a k-junta, then f is "far" from being determined by k parts in a
random partition. The structural lemma is proved using the Efron-Stein
decomposition method.
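A toy illustration of the partition idea (a sketch, not Blais's actual
tester; the helper names and parameter choices are hypothetical): randomly
partition the coordinates of a Boolean function, flag the parts the
function visibly depends on, and accept iff at most k parts are flagged.

```python
import random

def influential_parts(f, n, k, trials=300, seed=1):
    """Toy partition-based junta check for f: tuple of n bits -> value.

    Partition coordinates into O(k^2) random parts; a part is flagged
    influential if rerandomizing only its coordinates ever changes f.
    A k-junta's relevant variables touch at most k parts, so a k-junta
    is always accepted (one-sided error in this direction)."""
    rnd = random.Random(seed)
    num_parts = 4 * k * k
    part = [rnd.randrange(num_parts) for _ in range(n)]
    flagged = set()
    for _ in range(trials):
        x = [rnd.randint(0, 1) for _ in range(n)]
        p = rnd.randrange(num_parts)
        # rerandomize only the coordinates in part p
        y = [rnd.randint(0, 1) if part[i] == p else x[i] for i in range(n)]
        if f(tuple(x)) != f(tuple(y)):
            flagged.add(p)
    return flagged

def looks_like_junta(f, n, k):
    return len(influential_parts(f, n, k)) <= k
```

For example, the parity of 3 fixed bits out of 20 is accepted with k=3,
while the parity of all 20 bits flags many parts and is rejected.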
Linear-time Approximation Schemes for the Gale-Berlekamp Game and Related Minimization Problems
Marek Karpinski and Warren Schudy
We design a linear time approximation scheme for the Gale-Berlekamp
Switching Game and generalize it to a wider class of dense fragile
minimization problems including the Nearest Codeword Problem (NCP) and
Unique Games Problem. Further applications include, among other
things, finding a constrained form of matrix rigidity and maximum
likelihood decoding of an error correcting code. As another
application of our method we give the first linear time approximation
schemes for correlation clustering with a fixed number of clusters and
its hierarchical generalization. Our results depend on a new technique
of dealing with small objective function values of optimization
problems and could be of independent interest.
Direct Product Testing: Improved and Derandomized
Russell Impagliazzo and Valentine Kabanets and Avi Wigderson
The "direct product code" of a function f gives its values on all
possible k-tuples (f(x_1),...,f(x_k)). This basic construct underlies
"hardness amplification" in cryptography, circuit complexity and PCPs.
A recent breakthrough by Dinur and Goldenberg (FOCS'08) enabled, for
the first time, {\em testing} proximity to this important code in the
"list-decoding" regime. In particular, they give a 2-query test which
works for success probability 1/k^{\alpha}, and show that no such test
works below success probability 1/k.

Our main result is a 3-query test which works for exponentially small
success probability exp(-k^{\alpha}). Our techniques (which are based
on recent simplified decoding algorithms for the same code
(Impagliazzo et al., STOC'08)) also allow us to considerably simplify
the analysis of the 2-query test of [DG08]. Furthermore we are able to
derandomize it, allowing the code to be of polynomial rate independent
of k, and success probability 1/k^{\alpha}.

Finally we show the applicability of the new tests to PCPs. Starting
with a 2-query PCP of alphabet $\Sigma$ of size $2^t$ and soundness
error $1-\delta$, Raz's ($k$-fold) parallel repetition theorem (and
Holenstein's proof) \cite{Raz-parallel,Hol07} yields a new 2-query PCP
with soundness error $\exp(-(\delta^3k)/t)$, with the dependence on
the alphabet size essential \cite{Feige-Verbitsky}. Our techniques
yield a 3-query PCP with soundness error $\exp(-\delta k^{1/3})$.
While using 3 queries and having a worse exponential decay, our error
is {\em independent} of the alphabet size!
An Axiomatic Approach to Algebrization
Russell Impagliazzo and Valentine Kabanets and Antonina Kolokolova
In this paper, building on the work of Arora, Impagliazzo, and
Vazirani [AIV92], we propose an \emph{axiomatic} approach to
``algebrization'', which complements and clarifies the approaches of
Fortnow [For94] and Aaronson/Wigderson [AW08]. We add to the axiomatic
characterization of the class P from [AIV92] a new axiom,
\emph{Arithmetic Checkability}, and argue that, in a certain sense,
the resulting theory provides a characterization of ``algebrizing''
techniques. In particular, we show the following: \emph{(i)} Fortnow's
algebraic oracles from [For94] all satisfy Arithmetic Checkability (so
arithmetic checkability holds relative to arbitrarily powerful
oracles). \emph{(ii)} Most of the algebrizing collapses and
separations from [AW08], such as IP=PSPACE, $NP \subset ZKIP$ if
one-way functions exist, $MA\text{-}EXP\not\subset P/poly$, etc., are
\emph{provable} from Arithmetic Checkability. \emph{(iii)} Many of the
open complexity questions (shown to require non-algebrizing techniques
in [AW08]), such as ``P vs. NP'', ``NP vs. BPP'', etc., \emph{cannot
be proved} from Arithmetic Checkability. \emph{(iv)} Arithmetic
Checkability is also insufficient to prove one known result, NEXP=MIP
(although relative to an oracle satisfying Arithmetic Checkability,
$NEXP^O$ restricted to poly-length queries is contained in $MIP^O$,
mirroring a similar result from [AW08]).
Efficient discrete-time simulations of continuous-time quantum query algorithms
Richard Cleve and Daniel Gottesman and Michele Mosca and Rolando Somma and David Yonge-Mallo
Background:
Most quantum algorithms can be cast in a query model,
where some unknown function is computed by a black-box, and the goal
is to determine some property of the function with as few queries as
possible. One example is the main component of Shor's factoring
algorithm, which essentially determines the period of an unknown
function that is promised to be strictly periodic. Grover's algorithm
is another example. The continuous-time query model is a variant of
the discrete query model in which queries can be interleaved with
known operations (called "driving operations") continuously in
time. Interesting algorithms have been discovered in this model, such
as an algorithm for evaluating NAND trees more efficiently than any
classical algorithm. Subsequent work has shown that there also exists
an efficient algorithm for NAND trees in the discrete query
model.

New result:
An open question has been whether any quantum
algorithm in the continuous-time query model can be efficiently
simulated by a quantum algorithm in the discrete query model. We show
that any quantum algorithm in the continuous-time query model whose
total query time is T can be simulated by a quantum algorithm in the
discrete query model that makes O(T log T / loglog T) queries. This is
the first upper bound that is independent of the driving operations
(i.e., it holds even if the norm of the driving Hamiltonian is very
large). A corollary is that any lower bound of T queries for a problem
in the discrete-time query model immediately carries over to a lower
bound of Omega(T loglog T/log T) in the continuous-time query model.
Tight Lower Bounds for Greedy Routing in Uniform Small World Rings
Martin Dietzfelbinger and Philipp Woelfel
We consider an augmented ring-based network with vertices 0,...,n-1,
where each vertex is connected to its left and right neighbor and
possibly to some further vertices (called long range contacts). The
outgoing edges of a vertex v are obtained by choosing a subset C of
{1,2,...,n-1} at random according to a probability distribution mu
over all subsets C of {1,2,...,n-1} with 1,n-1 in C (in a uniform
manner, i.e., the distribution is the same for each vertex) and then
connecting v to (v+i) mod n for each i in C by a unidirectional
link. The long range contacts are the links for i in C-{1,n-1}.

Motivated by Kleinberg's (2000) Small World Graph model, and packet
routing strategies for peer-to-peer networks, the greedy routing
algorithm on augmented rings, where each node routes the packet to the
neighbor closest to the destination of the packet, has been
investigated thoroughly, both for the "one-sided case", where packets
can travel only in one direction through the interval between starting
and target node, and the "two-sided case", where there is no such
restriction.

It has been an open problem for some years to determine the expected
number of hops that are needed by the greedy strategy to route a
packet between two randomly chosen vertices, for the best possible
probability distribution mu. Upper bounds and partial lower bound
results were given, e.g., by Kleinberg (2000), Aspnes et al. (2002),
and Giakkoupis and Hadzilacos (2007). Results for the two-sided case
were achieved only for very restricted classes of distributions mu.

In this paper we resolve the lower bound problem optimally for
Omega(1) <= l <= O(log n), where l is the expected number of long
range contacts of a node. For both the two-sided and the one-sided
case we prove a lower bound of Omega((\log n)^2/l) for an arbitrary
distribution mu.

Our lower bounds were conjectured by Giakkoupis and Hadzilacos (2007).
They match the upper bounds of Aspnes et al. (2002), who for each l in
the considered range suggested a suitable distribution mu.
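The setting is easy to simulate. The sketch below (an illustration, not
the paper's construction; the harmonic choice of long-range offsets is
an assumption borrowed from Kleinberg's model) builds an augmented ring
with one long-range contact per node and measures two-sided greedy
routing hops.

```python
import random

def build_ring(n, rnd):
    """Each vertex links to offsets +1 and n-1 (i.e. -1 mod n), plus
    one long-range offset i chosen with probability proportional to
    1/i (a harmonic distribution, as in Kleinberg's model)."""
    offsets = list(range(2, n - 1))
    weights = [1.0 / i for i in offsets]
    return {v: [1, n - 1, rnd.choices(offsets, weights)[0]]
            for v in range(n)}

def greedy_hops(contacts, n, s, t):
    """Two-sided greedy routing: forward to the out-neighbor whose
    ring distance to t is smallest. The +-1 links guarantee strict
    progress, so the walk always terminates."""
    dist = lambda a, b: min((a - b) % n, (b - a) % n)
    hops = 0
    while s != t:
        s = min(((s + i) % n for i in contacts[s]),
                key=lambda u: dist(u, t))
        hops += 1
    return hops

rnd = random.Random(7)
n = 1024
contacts = build_ring(n, rnd)
pairs = [(rnd.randrange(n), rnd.randrange(n)) for _ in range(200)]
avg = sum(greedy_hops(contacts, n, s, t) for s, t in pairs) / len(pairs)
# with long-range contacts, avg is far below the ~n/4 hops of a plain ring
```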
Sherali-Adams Relaxations of the Matching Polytope
Claire Mathieu and Alistair Sinclair
We study the Sherali-Adams lift-and-project hierarchy of linear
programming relaxations of the matching polytope. Our main result is
an asymptotically tight expression $1+1/k$ for the integrality gap
after $k$ rounds of this hierarchy. The result is derived by a
detailed analysis of the LP after $k$ rounds applied to the complete
graph $K_{2d+1}$. We give an explicit recurrence for the value of
this LP, and hence show that its gap exhibits a ``phase transition,''
dropping from close to its maximum value $1+\\frac{1}{2d}$ to close to
$1$ around the threshold $k=2d-\\sqrt{d}$. We also show that the
{\\it rank\\/} of the matching polytope (i.e., the number of
Sherali-Adams rounds until the integer polytope is reached) is exactly
$2d-1$.
Mixing time for the solid-on-solid model
Fabio Martinelli and Alistair Sinclair
We analyze the mixing time of a natural local Markov chain (the
Glauber dynamics) on configurations of the solid-on-solid model of
statistical physics. This model has been proposed, among other
things, as an idealization of the behavior of contours in the Ising
model at low temperatures. Our main result is an upper bound on the
mixing time of $\tilde{O}(n^{3.5})$, which is tight within a factor of
$\tilde{O}(\sqrt{n})$. The proof, which in addition gives some
insight into the actual evolution of the contours, requires the
introduction of a number of novel analytical techniques that we
conjecture will have other applications.
The Detectability Lemma and Quantum Gap Amplification
Dorit Aharonov and Itai Arad and Zeph Landau and Umesh Vazirani
The quantum analog of a constraint satisfaction problem is a sum of
local Hamiltonians - each (term of the) Hamiltonian specifies a local
constraint whose violation contributes to the energy of the given
quantum state. Formalizing the intuitive connection between the ground
(minimal) energy of the Hamiltonian and the \emph{minimum} number of
violated constraints is problematic, since the number of constraints
being violated is not well defined when the terms in the Hamiltonian
do not commute. The detectability lemma proved in this paper provides
precisely such a quantitative connection. We apply the lemma to derive
a quantum analogue of the classical gap amplification lemma of random
walks on expander graphs. The quantum gap amplification lemma holds
for local Hamiltonians with expander interaction graphs. Our proofs
are based on a novel structure imposed on the Hilbert space, which we
call the XY decomposition, which enables a reduction from the quantum
non-commuting case to the commuting case (where many classical
arguments go through).

The results may have several interesting implications. First, proving
a quantum analogue to the PCP theorem is one of the most important
challenges in quantum complexity theory. Our quantum gap amplification
lemma may be viewed as the quantum analogue of the first of the three
main steps in Dinur's PCP proof. Quantum gap amplification may also be
related to fault tolerance of adiabatic computation, a model which has
attracted much attention but for which no fault tolerance theory has
been derived yet. Thirdly, the detectability lemma and the XY
decomposition provide a handle on the structure of local Hamiltonians
and their ground states. This may prove useful in the study of those
important objects, in particular in the fast-growing area of
``quantum Hamiltonian complexity'' connecting quantum complexity to
condensed matter physics.
MaxMin allocation via degree lower-bounded arborescences
MohammadHossein Bateni and Moses Charikar and Venkatesan Guruswami
We consider the problem of MaxMin allocation of indivisible goods.
There are m items to be distributed among n players. Each player i has
a nonnegative valuation p_{ij} for an item j, and the goal is to
allocate each item to a player so as to maximize the minimum total
valuation received by a player. There is a large gap in our
understanding of this problem. The best known positive result is an
$\tilde O(\sqrt n)$-approximation algorithm, while there is only a
factor 2 hardness known.

Better algorithms are known for the restricted assignment case, where
each item has exactly one nonzero value for the players. We study the
effect of bounded degree of items: each item has a nonzero value for
at most B players. We show that, essentially, the case B=3 is
equivalent to the general case, and give a 4-approximation algorithm
for B=2.

The current algorithmic results for MaxMin Allocation are based on a
complicated LP relaxation called the configuration LP. We present a
much simpler LP which is equivalent in power to the configuration LP.
We focus on a special case of MaxMin Allocation -- a family of
instances on which this LP has a polynomially large gap. The technical
core of our result for this case comes from an algorithm for an
interesting new optimization problem on directed graphs: MaxMinDegree
Arborescence, where the goal is to produce an arborescence of large
outdegree. We develop an n^\eps approximation for this problem that
runs in n^{O(1/\eps)} time and obtain a polylogarithmic approximation
that runs in quasipolynomial time, using a lift-and-project inspired
LP formulation. In fact, we show that our results imply a rounding
algorithm for the relaxations obtained by t rounds of the
Sherali-Adams hierarchy applied to a natural LP relaxation for the
problem. Roughly speaking, the integrality gap of the relaxation
obtained from t rounds of Sherali-Adams is at most n^{1/t}.
Inaccessible Entropy
Iftach Haitner and Omer Reingold and Salil Vadhan and Hoeteck Wee
We put forth a new computational notion of entropy, which measures the
(in)feasibility of sampling high-entropy strings that are consistent
with a given protocol. Specifically, we say that the i'th round of a
protocol (A,B) has {\em accessible entropy} at most k if no
polynomial-time strategy A^* can generate messages for A such that the
entropy of its message in the i'th round is greater than k when
conditioned both on prior messages of the protocol and on prior coin
tosses of A^*.

As applications of this notion, we
- give a much simpler and more efficient construction of statistically
hiding commitment schemes from arbitrary one-way functions, and
- prove that constant-round statistically hiding commitments are
necessary for constructing constant-round zero-knowledge proof systems
for NP that remain secure under parallel composition (assuming the
existence of one-way functions).
Hadwiger's Conjecture is Decidable
Ken-ichi Kawarabayashi and Bruce Reed
The famous Hadwiger's conjecture asserts that every graph with no
$K_{t}$-minor is $(t-1)$-colorable. The case $t=5$ is known to be
equivalent to the Four Color Theorem by Wagner, and the case $t=6$ was
settled by Robertson, Seymour and Thomas. So far the cases $t \geq 7$
are wide open.

In this paper, we prove the following two theorems:

\begin{enumerate}
\item
There is an $O(n^2)$ algorithm to decide Hadwiger's conjecture for the
case $t$.
\item
Every minimal counterexample to Hadwiger's conjecture for the case $t$
has at most $f(t)$ vertices for some explicit bound $f(t)$.
\end{enumerate}

The bound $f(t)$ is at most $p^{p^{p^t}}$, where $p=10^{10^{10^t}}$.
Our proofs for both results use the well-known result by Thomassen for
5-list-coloring planar graphs, together with some results (but not the
decomposition theorem) of Graph Minors.

Concerning the first result, we prove the following stronger theorem:

\medskip

For a given graph $G$ and any fixed $t$, there is an $O(n^2)$
algorithm to output one of the following:

\begin{enumerate}
\item
a $(t-1)$-coloring of $G$, or
\item
a $K_{t}$-minor of $G$, or
\item
a minor $H$ of $G$ of order at most $f(t)$ such that $H$ does not have
a $K_{t}$-minor nor is $(t-1)$-colorable.
\end{enumerate}

The last conclusion implies that $H$ is a counterexample to Hadwiger's
conjecture with at most $f(t)$ vertices. The time complexity of the
algorithm matches the best known algorithms for 4-coloring planar
graphs (the Four Color Theorem), due to Appel and Haken, and
Robertson, Sanders, Seymour and Thomas, respectively. Let us observe
that when $t=5$, the algorithm implies the Four Color Theorem.
A Unified Framework for Concurrent Security: Universal Composability from Stand-alone Non-malleability
Huijia Lin and Rafael Pass and Muthuramakrishnan Venkitasubramaniam
We present a unified framework for obtaining Universally Composable
(UC) protocols by relying on stand-alone secure non-malleable
commitments. Essentially all results on concurrent secure
computation---both in relaxed models (e.g., quasi-polynomial time
simulation), or with trusted set-up assumptions (e.g., the CRS model,
the imperfect CRS model, or the timing model)---are obtained as
special cases of our framework. This not only leads to conceptually
simpler solutions, but also to improved set-up assumptions,
round-complexity, and computational assumptions.

Additionally, this
framework allows us to consider new relaxed models of security: we
show that UC security where the adversary is a uniform PPT but the
simulator is allowed to be a non-uniform PPT (i.e., essentially,
traditional UC security, but with a non-uniform reduction) is possible
without any trusted set-up. This gives the first results on concurrent
secure computation without set-up, which can be used for securely
computing ``computationally-sensitive'' functionalities (e.g.,
data-base queries, ``proof of work''-protocols, or playing bridge on
the Internet).
Multiplicative updates outperform generic no-regret learning in congestion games
Robert Kleinberg and Georgios Piliouras and Eva Tardos
We study the outcome of natural learning algorithms in atomic
congestion games. Atomic congestion games have a wide variety of
equilibria often with vastly differing social costs. We show that in
almost all such games, the well-known multiplicative-weights learning
algorithm results in convergence to pure equilibria. Our results show
that natural learning behavior can avoid bad outcomes predicted by the
price of anarchy in atomic congestion games such as the load-balancing
game introduced by Koutsoupias and Papadimitriou, which has
super-constant price of anarchy and has correlated equilibria that are
exponentially worse than any mixed Nash equilibrium.

Our results
identify a set of mixed Nash equilibria that we call weakly stable
equilibria. Our notion of weakly stable is defined game-theoretically,
but we show that this property holds whenever a stability criterion
from the theory of dynamical systems is satisfied. This allows us to
show that in every congestion game, the distribution of play converges
to the set of weakly stable equilibria. Pure Nash equilibria are
weakly stable, and we show using techniques from algebraic geometry
that the converse is true with probability 1 when congestion costs are
selected at random independently on each edge (from any monotonically
parametrized distribution). We further extend our results to show that
players can use algorithms with different (sufficiently small)
learning rates, i.e. they can trade off convergence speed and long
term average regret differently.
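Purely as a toy illustration (not the paper's model or analysis), the
sketch below runs multiplicative-weights dynamics in a small
load-balancing game; the player-specific random cost coefficients
`a[i][j]` are an assumption made here to mirror the "random costs"
setting in which weakly stable equilibria are pure.

```python
import math, random

def mw_load_balancing(num_players=6, num_machines=3, eta=0.1,
                      rounds=4000, seed=3):
    """Multiplicative-weights dynamics in a load-balancing game.

    Player i's cost on machine j is a[i][j] times the expected load
    on j (including its own unit); each round, every player
    multiplicatively down-weights costly machines and renormalizes."""
    rnd = random.Random(seed)
    a = [[rnd.uniform(0.5, 1.5) for _ in range(num_machines)]
         for _ in range(num_players)]
    # start near uniform, with a tiny perturbation to break symmetry
    p = [[1.0 / num_machines + rnd.uniform(-0.01, 0.01)
          for _ in range(num_machines)] for _ in range(num_players)]
    p = [[x / sum(row) for x in row] for row in p]
    for _ in range(rounds):
        loads = [sum(p[i][j] for i in range(num_players))
                 for j in range(num_machines)]
        new = []
        for i in range(num_players):
            w = [p[i][j] *
                 math.exp(-eta * a[i][j] * (loads[j] - p[i][j] + 1))
                 for j in range(num_machines)]
            s = sum(w)
            new.append([x / s for x in w])
        p = new
    return p

p = mw_load_balancing()
# in typical runs the mixed strategies drift toward (nearly) pure ones
```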
Affiliation Networks
Silvio Lattanzi and D. Sivakumar
In the last decade, structural properties of several naturally arising
networks (the Internet, social networks, the web graph, etc.) have
been studied intensively with a view to understanding their evolution.
In recent empirical work, Leskovec, Kleinberg, and Faloutsos identify
two new and surprising properties of the evolution of many real-world
networks: densification (the ratio of edges to vertices grows over
time), and shrinking diameter (the diameter reduces over time to a
constant). These properties run counter to conventional wisdom, and
are certainly inconsistent with graph models prior to their work.

In this paper, we present the first simple, realistic, and
mathematically tractable generative model that intrinsically explains
all the well-known properties of social networks, as well as
densification and shrinking diameter. Our model is based on ideas
studied empirically in the social sciences, primarily on the
groundbreaking work of Breiger (1973) on bipartite models of social
networks that capture the affiliation of agents to societies.

We also present algorithms that harness the structural consequences of
our model. Specifically, we show how to overcome the bottleneck of
densification in computing shortest paths between vertices by
producing sparse subgraphs that preserve or approximate shortest
distances to all or a distinguished subset of vertices. This is a rare
example of an algorithmic benefit derived from a realistic graph
model.

Finally, our work also presents a modular approach to connecting
random graph paradigms (preferential attachment, edge-copying, etc.)
to structural consequences (heavy-tailed degree distributions,
shrinking diameter, etc.).
Finding, Minimizing, and Counting Weighted Subgraphs
Virginia Vassilevska and Ryan Williams
For a pattern graph H on k nodes, we consider the problems of finding
and counting the number of (not necessarily induced) copies of H in a
given large graph G on n nodes, as well as finding minimum weight
copies in both node-weighted and edge-weighted graphs. Our results
include:
-- The number of copies of an H with an independent set of size s can
be computed exactly in O*(2^s n^{k-s+3}) time. A minimum weight copy
of such an H (with arbitrary real weights on nodes and edges) can be
found in O*(4^{s+o(s)}n^{k-s+3}) time. (The O* notation omits poly(k)
factors.) These algorithms rely on fast algorithms for computing the
permanent of a k x n matrix, over rings and semirings.
-- The number of copies of any H having minimum (or maximum)
node-weight (with arbitrary real weights on nodes) can be found in
O(n^{wk/3} + n^{2k/3+o(1)}) time, where w < 2.4 is the matrix
multiplication exponent and k is divisible by 3. Similar results hold
for other values of k. Also, the number of copies having exactly a
prescribed weight can be found within this time. These algorithms
extend the technique of Czumaj and Lingas (SODA 2007) and give a new
(algorithmic) application of multiparty communication complexity.
-- Finding an edge-weighted triangle of weight exactly 0 in general
graphs requires Omega(n^{2.5-eps}) time for all eps > 0, unless the 3SUM
problem on N numbers can be solved in O(N^{2-delta}) time for some
delta > 0. This suggests that the edge-weighted problem is much harder
than its node-weighted version.
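For contrast with the conditional lower bound above, the exact-weight
triangle problem is trivial in cubic time; a minimal brute-force sketch
(the graph encoding via `frozenset` edge keys is an illustrative choice,
not the paper's):

```python
from itertools import combinations

def zero_weight_triangle(n, w):
    """Brute-force search for a triangle whose edge weights sum to 0.

    w maps frozenset({u, v}) to an edge weight; absent keys mean no
    edge. Runs in O(n^3) time -- the 3SUM-hardness result above says
    improving this to ~n^{2.5-eps} would be a breakthrough."""
    for i, j, k in combinations(range(n), 3):
        e1, e2, e3 = frozenset((i, j)), frozenset((j, k)), frozenset((i, k))
        if e1 in w and e2 in w and e3 in w and w[e1] + w[e2] + w[e3] == 0:
            return (i, j, k)
    return None
```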
Online and Stochastic Survivable Network Design
Anupam Gupta and Ravishankar Krishnaswamy and R. Ravi
Consider the \emph{edge-connectivity survivable network design}
problem: given a graph $G = (V,E)$ with edge-costs, and
edge-connectivity requirements $r_{ij} \in \Z_{\geq 0}$ for every pair
of vertices $i, j \in V$, find an (approximately) minimum-cost network
that provides the required connectivity. While this problem is known
to admit excellent approximation algorithms in the offline case, no
algorithms are known for this problem in the \emph{online} setting.
(In fact, there are instances where for an input sequence of length
$L = O(\log n)$, one can show a lower bound of $\Omega(L)$, in
contrast to the $1$-connectivity case where $\Theta(\log
L)$-competitive algorithms exist.)

In this paper, we give a randomized $O(r_{\max} \log^3
n)$-competitive online algorithm for this edge-connectivity network
design problem, where $r_{\max} = \max_{ij} r_{ij}$. Apart from being
the first results in this direction, we feel that our techniques are
non-standard and interesting in their own right: somewhat
surprisingly, we use the standard embeddings of graphs into their
random subtrees (i.e., into \emph{singly-connected subgraphs}) as a
crucial step to get algorithms for $k$-connectivity.

Our results for the online problem give us approximation algorithms
that admit \emph{strict} cost-shares with the same strictness value.
This, in turn, implies approximation algorithms for (a)~the
\emph{rent-or-buy} version and (b)~the (two-stage) \emph{stochastic
version} of the edge-connected network design problem with independent
arrivals. These are the first approximation algorithms known for these
versions of network design problems with higher connectivity
requirements. (The previous results known are either for simple
connectivity, or for $k$-connectivity where all the demands share a
common root node.)

We improve our results for the case when the underlying graph is
complete and the edge-costs are metric (i.e., satisfy the triangle
inequality). For this case, we can give $O(1)$-strict cost shares,
which gives constant-factor rent-or-buy and stochastic algorithms for
these instances.
Reconstruction for the Potts Model
Allan Sly
The reconstruction problem on the tree plays a key role in several
important computational problems. Deep conjectures in statistical
physics link the reconstruction problem to properties of random
constraint satisfaction problems, including random $k$-SAT and random
colourings of random graphs. At this threshold the space of solutions
is conjectured to undergo a phase transition from a single connected
mass to exponentially many small clusters, at which point local search
algorithms must fail. In computational biology the reconstruction
problem is central in phylogenetics. It has been shown [Mossel '04]
that solvability of the reconstruction problem is equivalent to
phylogenetic reconstruction with short sequences for the binary
symmetric model.

Rigorous reconstruction thresholds, however, have only been
established in a small number of models. We confirm conjectures made
by Mezard and Montanari for the Potts models, proving the first exact
reconstruction threshold in a non-binary model by establishing the
so-called Kesten-Stigum bound for the 3-state Potts model on regular
trees of large degree. We further establish that the Kesten-Stigum
bound is not tight for the $q$-state Potts model when $q \geq 5$.
Moreover, we determine asymptotics for these reconstruction
thresholds.
Approximating Edit Distance in Near-Linear Time
Alexandr Andoni and Krzysztof Onak
We show how to compute the edit distance between two strings of length
n up to a factor of 2^{O(\sqrt{\log n}\log\log n)} in n^{1+o(1)} time.
This is the first sub-polynomial approximation algorithm for this
problem that runs in near-linear time, improving on the
state-of-the-art n^{1/3+o(1)} approximation. Previously, an
approximation of 2^{O(\sqrt{\log n\log\log n})} was known only for
{\em embedding} edit distance into \ell_1, and it is not known if that
embedding can be computed in less than n^2 time.
Universally Utility-Maximizing Privacy Mechanisms
Arpita Ghosh and Tim Roughgarden and Mukund Sundararajan
A mechanism for releasing information about a statistical database
with sensitive data must resolve a trade-off between utility and
privacy. Informally, publishing fully accurate information maximizes
utility while minimizing privacy, while publishing random noise
accomplishes the opposite. Formally, privacy can be rigorously
quantified using the framework of {\em differential privacy}, which
requires that a mechanism's output distribution is nearly the same (in
a strong sense) whether a given database row is included or excluded.
Thus far, (dis)utility has typically been quantified via the expected
error of a (randomly perturbed) response to a query.

In this paper, we pursue much stronger and more general utility
guarantees. Just as differential privacy guarantees protection against
every potential attacker, independent of its side information, we seek
a mechanism that guarantees near-optimal utility to every potential
user, independent of its side information. Formally, we model the side
information of a potential user as a prior distribution over query
results. An interaction between such a user and a mechanism induces a
posterior distribution; the utility of the mechanism for this user is
defined as the accuracy of this posterior, as quantified via a
user-specific loss function. A differentially private mechanism $M$ is
(near-)optimal for a given user $u$ if $u$ derives (almost) as much
utility from $M$ as from any other differentially private mechanism.

We prove that for each fixed counting query and differential privacy
level, there is a {\em geometric mechanism} $M^*$ --- a discrete
variant of the well-studied Laplace mechanism --- that is {\em
simultaneously utility-maximizing} for every possible user, subject to
the differential privacy constraint. This is an extremely strong
utility guarantee: {\em every} potential user $u$, no matter what its
side information and loss function, derives as much utility from $M^*$
as from interacting with a differentially private mechanism $M_u$ that
is optimally tailored to $u$. The first part of our proof
characterizes the utility-maximizing differentially private mechanism
for a fixed but arbitrary user in terms of a basic feasible solution
to a linear program with constraints that encode differential privacy.
The second part shows that all of the relevant vertices of this
polytope (ranging over all possible users) are derivable from the
geometric mechanism via suitable remappings of its range.
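The geometric mechanism itself is simple to implement: add two-sided
geometric noise with Pr[Z = z] proportional to alpha^|z| for alpha =
exp(-epsilon). A minimal sketch (the sampling-by-difference trick is a
standard identity, not from this paper):

```python
import math, random

def geometric_mechanism(true_count, epsilon, rnd):
    """Release true_count plus two-sided geometric noise.

    The noise Z has Pr[Z = z] = (1-alpha)/(1+alpha) * alpha^|z| with
    alpha = exp(-epsilon); it is sampled as the difference of two
    i.i.d. geometric variables (number of failures before success)."""
    alpha = math.exp(-epsilon)
    p = 1.0 - alpha

    def geom():  # failures before the first success, Pr[g] = (1-alpha)*alpha^g
        g = 0
        while rnd.random() > p:
            g += 1
        return g

    return true_count + geom() - geom()

rnd = random.Random(0)
noisy = geometric_mechanism(42, 1.0, rnd)  # an integer near 42
```

The noise is integer-valued and symmetric about 0, which is what makes
the mechanism a discrete analogue of the Laplace mechanism.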
Integrality Gaps for Sherali-Adams Relaxations
Moses Charikar and Konstantin Makarychev and Yury Makarychev
We prove strong lower bounds on integrality gaps of Sherali-Adams
relaxations for MAX CUT, Vertex Cover, Sparsest Cut and other
problems. Our constructions show gaps for Sherali-Adams relaxations
that survive $n^{\delta}$ rounds of lift and project. For MAX CUT and
Vertex Cover, these show that even $n^{\delta}$ rounds of
Sherali-Adams do not yield a better than $(2-\epsilon)$ approximation.

The main combinatorial challenge in constructing these gap examples is
the construction of a fractional solution that is far from an integer
solution, yet admits consistent distributions of local solutions for
all small subsets of variables. Satisfying this consistency
requirement is one of the major hurdles to constructing Sherali-Adams
gap examples. We present a modular recipe for achieving this, building
on previous work on metrics with a local-global structure. We develop
a conceptually simple geometric approach to constructing Sherali-Adams
gap examples via constructions of consistent local SDP solutions.

This geometric approach is surprisingly versatile. We construct
Sherali-Adams gap examples for Unique Games based on our construction
for MAX CUT together with a parallel-repetition-like procedure. This
in turn allows us to obtain Sherali-Adams gap examples for any problem
that has a Unique Games based hardness result (with some additional
conditions on the reduction from Unique Games). Using this, we
construct $(2-\epsilon)$ gap examples for Maximum Acyclic Subgraph
that rule out any family of linear constraints with support at most
$n^{\delta}$.
On Oblivious PTAS for Nash Equilibrium
Constantinos Daskalakis and Christos H. Papadimitriou
If a class of games is known to have a Nash equilibrium with
probability values that are either zero or Omega(1) --- and thus with
support of bounded size --- then obviously this equilibrium can be
found exhaustively in polynomial time. Somewhat surprisingly, we show
that there is a PTAS for the class of games whose equilibria are
guaranteed to have small --- O(1/n) --- values, and therefore large
--- Omega(n) --- supports. We also point out that there is a PTAS for
games with sparse payoff matrices, a family for which the exact
problem is known to be PPAD-complete [Chen,Deng,Teng '06]. Both
algorithms are of a special kind that we call oblivious: The algorithm
just samples a fixed distribution on pairs of mixed strategies, and
the game is only used to determine whether the sampled strategies
comprise an epsilon-Nash equilibrium; the answer is "yes" with inverse
polynomial probability (in the second case, the algorithm is actually
deterministic). These results bring about the question: Is there an
oblivious PTAS for finding a Nash equilibrium in general games? We
answer this question in the negative; our lower bound comes close to
the quasi-polynomial upper bound of [Lipton, Markakis, Mehta '03].

Another recent PTAS for anonymous games [Daskalakis, Papadimitriou
'07, '08, Daskalakis '08] is also oblivious in a weaker sense
appropriate for this class of games (it samples from a fixed
distribution on unordered collections of mixed strategies), but its
running time is exponential in 1/epsilon. We prove that any oblivious
PTAS for anonymous games with two strategies and three player types
must have (1 / epsilon^alpha) in the exponent of the running time for
some alpha >= 1/3, rendering the algorithm in [Daskalakis '08] (which
works with any bounded number of player types) essentially optimal
within oblivious algorithms. In contrast, we devise a poly(n)*(1/
epsilon)^O(log^2(1/epsilon)) non-oblivious PTAS for anonymous games
with two strategies and any bounded number of player types. The key
idea of our algorithm is to search not over unordered sets of mixed
strategies, but over a carefully crafted set of collections of the
first O(log(1/ epsilon)) moments of the distribution of the number of
players playing strategy 1 at equilibrium. The algorithm works
because of a probabilistic result of more general interest that we
prove: the total variation distance between two sums of independent
indicator random variables decreases exponentially with the number of
moments of the two sums that are equal, independent of the number of
indicators.
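The moment-matching phenomenon behind this last result can be probed numerically. The sketch below is an illustration of the statement only, not the authors' proof; the probability vectors are chosen arbitrarily. It computes exact distributions of sums of independent indicators by convolution and compares total variation distances: already matching the first moment brings two such sums much closer than a mismatched pair, regardless of how many indicators each sum uses.

```python
def poisson_binomial_pmf(ps):
    """Exact distribution of a sum of independent Bernoulli(p_i) indicators,
    built up one indicator at a time by convolution."""
    dist = [1.0]
    for p in ps:
        nxt = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            nxt[k] += mass * (1.0 - p)   # the new indicator is 0
            nxt[k + 1] += mass * p       # the new indicator is 1
        dist = nxt
    return dist

def total_variation(d1, d2):
    """TV distance between two distributions on {0, 1, 2, ...}."""
    n = max(len(d1), len(d2))
    d1 = d1 + [0.0] * (n - len(d1))
    d2 = d2 + [0.0] * (n - len(d2))
    return 0.5 * sum(abs(a - b) for a, b in zip(d1, d2))

base = poisson_binomial_pmf([0.5] * 10)        # mean 5, 10 indicators
same_mean = poisson_binomial_pmf([0.25] * 20)  # also mean 5, 20 indicators
diff_mean = poisson_binomial_pmf([0.25] * 10)  # mean 2.5

# Matching even the first moment makes the two sums far closer in TV
# distance, despite the different numbers of indicators involved.
print(total_variation(base, same_mean))  # small
print(total_variation(base, diff_mean))  # much larger
```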
Fully Homomorphic Encryption Using Ideal Lattices
Craig Gentry
We propose a fully homomorphic encryption scheme -- i.e., a scheme
that would allow one to evaluate circuits over encrypted data without
access to the decryption function. First, we provide a general
preliminary result -- that, to construct an encryption scheme that
permits evaluation of {\em arbitrary circuits}, it suffices to
construct an encryption scheme that can evaluate (slightly augmented
versions of) its {\em own decryption circuit}; we call a scheme that
can evaluate its (augmented) decryption circuit {\em bootstrappable}.
Next, we provide a bootstrappable public key encryption scheme using
{\em ideal lattices}.
Intuitively, ideal lattices are well-suited to construct
bootstrappable encryption for two reasons. First, lattice-based
cryptosystems typically have decryption algorithms with low circuit
complexity, often dominated by an inner product computation that is in
NC1. By contrast, exponentiation as used in RSA or ElGamal decryption
is not even known to be in NC. Second, we show that {\em ideal}
lattices provide both additive and multiplicative homomorphisms
(modulo a public-key ideal in a polynomial ring that is represented as
a lattice), as needed to evaluate general circuits.
Unfortunately, our initial scheme is not quite bootstrappable -- i.e.,
the depth that the scheme can correctly evaluate can be logarithmic
in the lattice dimension, just like the depth of the decryption
circuit, but the latter is greater than the former. However, we show
how to modify the scheme to reduce the depth of the decryption
circuit, and thereby obtain a bootstrappable encryption scheme,
without reducing the depth that the scheme can evaluate. Abstractly,
we accomplish this by enabling the {\em encrypter} to start the
decryption process, leaving less work for the decrypter, much like
the server leaves less work for the decrypter in a server-aided
cryptosystem. In more detail, we include a set of vectors in the
public key that includes a hidden low-weight subset that is related
to the secret key. This allows us to re-express decryption as a
low-weight subset sum rather than a full matrix-vector product, where
the former requires a circuit of lower depth.
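The ideal-lattice construction itself is far beyond a short snippet, but the two homomorphisms the abstract describes, addition and multiplication with noise that grows per operation, can be illustrated by a toy, completely insecure integers-based scheme in the spirit of the later van Dijk-Gentry-Halevi-Vaikuntanathan variant; every parameter below is made up for illustration.

```python
import random

def keygen(bits=20, rng=random.Random(1)):
    # secret key: a random odd integer p (toy parameters, not secure)
    return rng.randrange(1 << (bits - 1), 1 << bits) | 1

def encrypt(p, m, rng=random.Random(2)):
    # hide the bit m in the parity of a small "noise" term 2r + m,
    # masked by a large random multiple of the secret p
    q = rng.randrange(1 << 10, 1 << 11)
    r = rng.randrange(0, 4)
    return p * q + 2 * r + m

def decrypt(p, c):
    # correct as long as the accumulated noise stays below p
    return (c % p) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
print(decrypt(p, c0 + c1))   # 1 -- ciphertext addition acts as XOR
print(decrypt(p, c0 * c1))   # 0 -- ciphertext multiplication acts as AND
```

Each multiplication roughly squares the noise term, which is why evaluation depth is limited and why reducing the decryption circuit's depth (bootstrapping) matters.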
Random walks on polytopes and an affine interior point method for linear programming.
Ravi Kannan and Hariharan Narayanan
Let $P$ be a polytope in $\\R^n$ defined by $m$ linear inequalities.
We give a new Markov chain algorithm to draw a nearly uniform sample
from $P$; the running time is strongly polynomial (this is the first
algorithm with this property) given a ``central'' starting point
$x_0$. If $s$ is the supremum over all chords $\\overline{pq}$ passing
through $x_0$ of $\\frac{|p-x_0|}{|q-x_0|}$ and $\\epsilon$ is an
upper bound on the total variation distance from the uniform
distribution that we want, it is sufficient to take $O\\left(m
n\\left( n\\log (s m) + \\log \\frac{1}{\\epsilon}\\right)\\right)$
steps of the random walk. We use this result to design an affine
interior point algorithm that does a {\\it single} random walk to
solve linear programs approximately. More precisely, suppose $Q =
\\{z \\big | Bz \\leq \\one\\}$ contains a point $z$ such that $c^T z
\\geq d$ and $r := \\sup_{z \\in Q} \\|Bz\\| + 1$, where $B$ is an $m
\\times n$ matrix. Then, after $\\tau = O\\left(mn \\left(n
\\ln\\left(\\frac{mr}{\\eps}\\right) +
\\ln\\frac{1}{\\delta}\\right)\\right)$ steps, the random walk is at a
point $x_\\tau$ such that $c^T x_\\tau \\geq d(1-\\eps)$ with
probability greater than $1-\\delta$. The fact that this algorithm has
polynomial run-time is notable since the corresponding deterministic
affine algorithm analyzed by Dikin has no known polynomial guarantees.
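The walk analyzed above is an affine, Dikin-type walk; as a loose illustration of the general idea of sampling a polytope {z : Bz <= 1} with a Markov chain, here is a minimal hit-and-run sketch (a different and much simpler chain, with none of the paper's strong polynomiality guarantees).

```python
import math
import random

def hit_and_run(B, x, steps, rng=random.Random(0)):
    """Hit-and-run walk inside the polytope {z : Bz <= 1},
    starting from an interior point x."""
    n = len(x)
    for _ in range(steps):
        # pick a uniformly random direction on the unit sphere
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]
        # the chord {x + t*d} stays inside while B(x + t*d) <= 1 componentwise
        lo, hi = float('-inf'), float('inf')
        for row in B:
            bd = sum(r * v for r, v in zip(row, d))
            bx = sum(r * v for r, v in zip(row, x))
            if abs(bd) < 1e-12:
                continue            # direction parallel to this facet
            t = (1.0 - bx) / bd
            if bd > 0:
                hi = min(hi, t)
            else:
                lo = max(lo, t)
        # move to a uniform point on the chord through x
        t = rng.uniform(lo, hi)
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

# sample (approximately uniformly, after mixing) from the square [-1, 1]^2
B = [[1, 0], [-1, 0], [0, 1], [0, -1]]
x = hit_and_run(B, [0.0, 0.0], steps=300)
print(x)  # a point inside the square
```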
When and How Can Data be Efficiently Released with Privacy?
Cynthia Dwork and Moni Naor and Omer Reingold and Guy Rothblum and Salil Vadhan
We consider private data analysis in the setting in which a trusted
and trustworthy curator, having obtained a large data set containing
private information, releases to the public a ``sanitization'' of the
data set that simultaneously protects the privacy of the individual
contributors of data and offers utility to the data analyst. The
sanitization may be in the form of an arbitrary data structure,
accompanied by a computational procedure for determining approximate
answers to queries on the original data set, or it may be a
``synthetic data set'' consisting of data items drawn from the same
universe as items in the original data set; queries are carried out as
if the synthetic data set were the actual input. In either case the
process is non-interactive; once the sanitization has been released
the original data and the curator play no further role.

Blum {\\it et al.} (STOC `08) showed the remarkable result that, for
any set $X$ of potential data items and any ``concept'' class $\\C$
of functions $f : X\\rightarrow \\{0,1\\}$, the {\\em exponential
mechanism} of McSherry and Talwar (FOCS `07) can be used to
(inefficiently) generate a synthetic data set that maintains
approximately correct fractional counts for all concepts in $\\C$,
while ensuring a strong privacy guarantee.

In this work we investigate the computational complexity of
non-interactive privacy mechanisms, mapping the boundary between
feasibility and infeasibility. Let $\\secp$ be a computation
parameter. We show
\\begin{enumerate}
\\item
\\label{alg} When $|\\C|$ and $|X|$ are both polynomial in $\\secp$,
it is possible to efficiently (in $\\secp$) and privately construct
synthetic data sets maintaining approximately correct counts, even
when the original data set is very small (roughly,
$O(2^{\\sqrt{\\log|\\C|}}\\log|X|)$).
\\item
When either $|\\C|$ or $|X|$ is superpolynomial in $\\secp$ there
exist distributions on data sets and a choice of $\\C$ for which,
assuming the existence of one-way functions, there is no efficient
private construction of a synthetic data set maintaining approximately
correct counts.
\\item
Turning to the potentially easier problem of privately generating a
data structure from which it is possible to approximate counts, there
is a tight connection between hardness of sanitization and the
existence of {\\em traitor tracing} schemes, a method of content
distribution in which (short) key strings are assigned to subscribers
in a way that, given any useful ``pirate'' key string constructed by a
coalition of malicious subscribers, it is possible to identify at
least one colluder. Using known schemes, we obtain a distribution on
databases and a concept class permitting inefficient sanitization (via
Blum {\\it et al.}), but for which no efficient sanitization is
possible (under certain complexity assumptions).
\\end{enumerate}

\\noindent Our algorithmic results give for the first time insight
into the value of synthetic data sets beyond that of general data
structures.
On Cryptography with Auxiliary Input
Yevgeniy Dodis and Yael Tauman Kalai and Shachar Lovett
We study the question of designing cryptographic schemes which are
secure even if an *arbitrary* function $f(sk)$ of the secret key is
leaked, as long as the secret key $sk$ is still (exponentially) hard
to compute from this auxiliary input. We note that this assumption on
the auxiliary input is weaker than the more traditional assumption
that $sk$ has high min-entropy, and applies even to situations where
$f(sk)$ *information-theoretically* determines the entire secret key
$sk$.

As our main result, we construct CPA/CCA secure symmetric encryption
schemes that remain secure with exponentially hard-to-invert auxiliary
input. We give several applications of such schemes.
(1) We construct an average-case obfuscator for the class of point
functions, which remains secure with exponentially hard-to-invert
auxiliary input, and is reusable.
(2) We construct a *reusable* and *robust* extractor that remains
secure with exponentially hard-to-invert auxiliary input.

Our results rely on the existence of trapdoor permutations, and on a
new cryptographic assumption, which is a generalized version of the
learning parity-with-noise (LPN) assumption. We stress that this
assumption is ``efficiently falsifiable'' and does not quantify over
all auxiliary functions $f$.
Polynomial-time Theory of Matrix Groups
L\\'aszl\\'o Babai and Robert M. Beals and \\'Akos Seress
It has been known since 1980 that basic manipulation of permutation
groups, given by a list of generators, can be performed in polynomial
time. In contrast, for matrix groups over finite fields, these
problems present great difficulties. For starters, the one-by-one case
requires factoring integers and discrete logarithms; so we cannot hope
to avoid these obstacles in the general case.

After 25 years of increasingly intensive study by the theoretical
computer science and computational group theory communities, we have
finally arrived at a definitive result on the basic questions: If G is
a matrix group over a finite field of odd order then the order of G
and membership in G can be determined in randomized polynomial time,
using oracles for factoring and discrete log.

Previously similar results were known for solvable matrix groups only
(Luks, FOCS 1992).

We are also able to claim that the obstacles to polynomial-time
computability are purely "abelian": If a matrix group G has no abelian
normal subgroups then we can determine the order of G, and name all of
its composition factors, in genuine randomized polynomial time (no
oracles). This yields a natural problem in BPP not known to be in RP
or coRP.

The overall procedure combines new elementary algorithms with a large
body of previous work, algorithmic as well as group theoretic. Some of
the ingredients depend on detailed knowledge of the classification of
finite simple groups. However, no such knowledge will be required for
reading this paper.
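The remark about the one-by-one case is very concrete: deciding membership in the group generated by a single 1x1 matrix g over F_p is exactly the discrete logarithm problem. A brute-force toy, feasible only for tiny p, makes the reduction explicit (the specific field and elements below are my own example).

```python
def dlog(p, g, h):
    """Smallest k with g**k == h (mod p), or None if h is not in <g>.
    Brute force -- feasible only for tiny p."""
    x = 1
    for k in range(p):              # the order of g divides p - 1 < p
        if x == h % p:
            return k
        x = (x * g) % p
    return None

# membership in the cyclic group <g> inside F_17^* is a discrete log question:
print(dlog(17, 3, 13))   # 4, since 3**4 = 81 = 13 (mod 17)
print(dlog(17, 13, 5))   # None: 5 lies outside the order-4 subgroup <13>
```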
Message Passing Algorithms and Improved LP Decoding
Sanjeev Arora and Constantinos Daskalakis and David Steurer
Linear programming decoding for low-density parity check codes (and
related domains such as compressed sensing) has received increased
attention over recent years because of its practical performance
(coming close to that of iterative decoding algorithms) and its
amenability to finite-blocklength analysis. Several works starting
with the work of Feldman et al. showed how to analyze LP decoding
using properties of expander graphs. This line of analysis works only
for low error rates, about a couple of orders of magnitude lower than
the empirically observed performance. It is possible to do better for
the case of random noise, as shown by Daskalakis et al. and Koetter
and Vontobel.

Building on work of Koetter and Vontobel, we obtain a novel
understanding of LP decoding which allows us to establish a
0.047-fraction of correctable errors for rate-$1/2$ codes; this comes
very close to the performance of iterative decoders and is
significantly higher than the best previously noted correctable bit
error rates. Unlike other techniques, our analysis considers directly
the primal linear program, and works by exploiting an explicit
connection between LP decoding, Sherali-Adams relaxations, and message
passing algorithms.
A Fast and Efficient Algorithm for Low-rank Approximation of a Matrix
Nam H. Nguyen and Thong T. Do and Trac D. Tran
Low-rank matrix approximation seeks a rank-$k$ version of an $m
\\times n$ matrix $\\mat A$ that is as ``close'' as possible to the
best rank-$k$ SVD approximation $\\mat A_k$ of $\\mat A$. Previous
approaches approximate $\\mat A$ by non-uniform adaptive sampling of
some columns (or rows) of $\\mat A$, hoping that this subset of
columns contains enough information about $\\mat A$. The sub-matrix is
then used for the approximation process. However, these approaches
tend to be slow due to the complexity of the adaptive sampling. In
this paper, we propose a fast and efficient algorithm which first
preprocesses the matrix $\\mat A$ in order to spread out the
information (energy) of every column (or row) of $\\mat A$, and then
selects some of its columns (or rows) uniformly at random. Finally, a
rank-$k$ approximation is generated from the row space of the selected
set. The preprocessing step is performed by uniformly randomizing the
signs of the entries of $\\mat A$ and transforming all columns of
$\\mat A$ by an orthonormal matrix $\\mat F$ with a fast
implementation (e.g. Hadamard, FFT, DCT). Our main contributions are
summarized as follows.

1) We show that by uniformly selecting $d$ rows of the preprocessed
matrix with $d = \\Order \\left( \\frac{1}{\\eta} k \\log
\\frac{2m}{\\beta} \\right)$, we guarantee the relative Frobenius norm
error approximation $\\left(1 + \\eta \\right) \\norm{A - A_k}_F$ with
probability at least $1 - 5\\beta$.

2) With the same $d$, we establish a spectral norm error approximation
of $\\left(2 + \\sqrt{\\frac{2m}{d}} \\right) \\norm{A - A_k}_2$ with
probability at least $1 - 2\\beta$.

3) The algorithm requires $2$ passes over the data and runs in time
$\\Order \\left( mn \\log d + (m+n) d^2 \\right)$, which, to the best
of our knowledge, makes it one of the fastest algorithms when the
matrix $\\mat A$ is dense.
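The effect of the preprocessing step can be seen on a toy example (my own construction, with a fixed 4x4 Hadamard transform standing in for the general $\\mat F$): a matrix whose energy sits entirely in one row is mixed so that every row carries an equal share, which is what makes subsequent uniform row sampling informative.

```python
import random

# 4x4 normalized Hadamard matrix (entries +-1/2), an orthonormal transform
H = [[s / 2.0 for s in row] for row in
     [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def row_norms(M):
    return [sum(v * v for v in row) ** 0.5 for row in M]

def preprocess(A, rng=random.Random(0)):
    """Randomly flip row signs, then mix the rows with the Hadamard transform."""
    signs = [rng.choice([-1.0, 1.0]) for _ in A]
    DA = [[s * v for v in row] for s, row in zip(signs, A)]
    return matmul(H, DA)

# A concentrates all of its energy in a single row ...
A = [[3.0, 4.0, 0.0, 0.0], [0.0] * 4, [0.0] * 4, [0.0] * 4]
B = preprocess(A)
# ... but after preprocessing the energy is spread evenly across rows,
# so a uniformly sampled row is always informative.
print(row_norms(A))   # [5.0, 0.0, 0.0, 0.0]
print(row_norms(B))   # all four norms equal 2.5
```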
Short seed extractors against quantum storage
Amnon Ta-Shma
In the classical privacy amplification problem, Alice and Bob share
information that is only partially secret with respect to an
eavesdropper Charlie. Their goal is to distill this information to a
shorter string that is completely secret. The classical privacy
amplification problem can be solved almost optimally using
extractors. An interesting variant of the problem, where the
eavesdropper Charlie is allowed to keep quantum information rather
than just classical information, was introduced by Konig, Maurer and
Renner. In this setting, the eavesdropper Charlie may entangle himself
with the input (without changing it), and the only limitation on
Charlie is that he may keep at most b qubits of storage. A natural
question is whether there are classical extractors that are good even
against quantum storage.

Recent work has shown that some classical extractors fail miserably
against quantum storage. At the same time, it was shown that some
other classical extractors work well even against quantum storage, but
all these extractors had a large seed length that was either as large
as the extractor output, or as large as the quantum storage available
to the eavesdropper.

In this paper we show that a modified version of Trevisan's extractor
is good even against quantum storage, thereby giving the first such
construction with logarithmic seed length. The technique we use is a
combination of Trevisan's approach of constructing an extractor from a
black-box pseudorandom generator, together with locally list-decodable
codes and previous work on quantum random access codes.