STOC 2011 Accepted Papers
(in order of submission)

Quantum One-Way Communication can be Exponentially Stronger Than Classical Communication

Bo'az Klartag and Oded Regev
Affiliations: Tel Aviv University and Tel Aviv University & CNRS, ENS, Paris
Abstract:
In STOC 1999, Raz presented a (partial)
function for which there is a quantum protocol communicating only
$O(\log n)$ qubits, but for which any classical (randomized,
bounded-error) protocol requires $\poly(n)$ bits of communication. That
quantum protocol requires two rounds of communication. Ever since Raz's
paper it was open whether the same exponential separation can be
achieved with a quantum protocol that uses only one round of
communication. Here we settle this question in the affirmative.

Multicut is FPT

Nicolas Bousquet and Jean Daligault and St\'ephan Thomass\'e
Affiliations: LIRMM, 161 rue Ada, 34392 Montpellier Cedex 5, France
Abstract:
Let $G=(V,E)$ be a graph and $R$ be a set of pairs
of elements of $V$ called \emph{requests}. A \emph{multicut} is a subset
$F$ of $E$ such that every request $xy$ of $R$ is cut by $F$, i.e.
every $xy$-path of $G$ intersects $F$. We show that there exists an
$O(f(k)n^c)$ algorithm which decides, given a graph on $n$ vertices, a
set of requests and an integer $k$, if there exists a multicut of size at
most $k$.
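For concreteness, here is a brute-force sketch of the decision problem just described (exponential-time, unlike the FPT algorithm claimed in the abstract; all function names are ours):

```python
from itertools import combinations

def cuts_all_requests(vertices, edges, removed, requests):
    """Check that deleting the edge set `removed` cuts every request pair."""
    # Build adjacency over the surviving edges.
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if (u, v) not in removed and (v, u) not in removed:
            adj[u].add(v)
            adj[v].add(u)
    # A request (s, t) is cut iff t is unreachable from s.
    for s, t in requests:
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if t in seen:
            return False
    return True

def min_multicut(vertices, edges, requests, k):
    """Return a multicut of size at most k if one exists, else None."""
    for size in range(k + 1):
        for removed in combinations(edges, size):
            if cuts_all_requests(vertices, edges, set(removed), requests):
                return set(removed)
    return None
```

On a path 1-2-3-4 with the single request (1,4), any one edge of the path is a valid multicut of size 1.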

Pareto Optimal Solutions for Smoothed Analysts

Ankur Moitra and Ryan O'Donnell
Affiliations: Massachusetts Institute of Technology and Carnegie Mellon University
Abstract:
Consider an optimization problem with $n$
binary variables and $d+1$ linear objective functions. Each valid
solution $x \in \{0,1\}^n$ gives rise to an objective vector in
$\R^{d+1}$, and one often wants to enumerate the Pareto optima among
them. In the worst case there may be exponentially many Pareto optima;
however, it was recently shown that in (a generalization of) the
smoothed analysis framework, the expected number is polynomial in~$n$.
Unfortunately, the bound obtained had a rather bad dependence on $d$;
roughly $n^{d^d}$. In this paper we show a significantly improved bound
of~$n^{2d}$.
Our proof is based on analyzing two algorithms. The first algorithm, on
input a Pareto optimal $x$, outputs a ``testimony'' containing clues
about $x$'s objective vector, $x$'s coordinates, and the region of space
$B$ in which $x$'s objective vector lies. The second algorithm can be
regarded as a {\em speculative} execution of the first: it can
uniquely reconstruct $x$ from the testimony's clues and just \emph{some}
of the probability space's outcomes. The remainder of the probability
space's outcomes are just enough to bound the probability that $x$'s
objective vector falls into the region~$B$.
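As a small illustration of the objects being counted (ours, not the paper's): taking maximization in every coordinate, an objective vector is Pareto optimal iff no other vector weakly dominates it. A quadratic-time Python sketch:

```python
def pareto_optima(vectors):
    """Return the Pareto-optimal vectors (maximization in every coordinate)."""
    def dominates(a, b):
        # a dominates b if a >= b coordinate-wise and they differ somewhere.
        return all(x >= y for x, y in zip(a, b)) and a != b
    return [v for v in vectors if not any(dominates(w, v) for w in vectors)]
```

For example, among (1,2), (2,1), (0,0), (1,1), only the first two are Pareto optimal.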

An Optimal Lower Bound on the Communication Complexity of Gap-Hamming-Distance

Amit Chakrabarti and Oded Regev
Affiliations: Dartmouth College and Tel Aviv University & CNRS, ENS, Paris
Abstract:
We prove an optimal $\Omega(n)$ lower bound on
the randomized communication complexity of the much-studied
\textsc{Gap-Hamming-Distance} problem. As a consequence, we obtain
essentially optimal multi-pass space lower bounds in the data stream
model for a number of fundamental problems, including the estimation of
frequency moments.
The \textsc{Gap-Hamming-Distance} problem is a communication problem,
wherein Alice and Bob receive $n$-bit strings $x$ and $y$, respectively.
They are promised that the Hamming distance between $x$ and $y$ is
either at least $n/2+\sqrt{n}$ or at most $n/2-\sqrt{n}$, and their goal
is to decide which of these is the case. Since the formal presentation
of the problem by Indyk and Woodruff (FOCS, 2003), it had been
conjectured that the na\"ive protocol, which uses $n$ bits of
communication, is asymptotically optimal. The conjecture was
shown to be true in several special cases, e.g., when the
communication is deterministic, or when the number of rounds of
communication is limited.
The proof of our aforementioned result, which settles this conjecture
fully, is based on a new geometric statement regarding correlations in
Gaussian space, related to a result of C.~Borell (1985). To prove this
geometric statement, we show that random projections of not-too-small
sets in Gaussian space are close to a mixture of translated normal
variables.
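The na\"ive protocol referred to above is simple enough to sketch (illustrative, not from the paper): Alice ships all of $x$, costing $n$ bits, and Bob decides locally using the promise.

```python
def naive_protocol(x, y):
    """Alice sends her entire n-bit string x; Bob decides locally.

    Under the promise that the Hamming distance is either
    >= n/2 + sqrt(n) ("far") or <= n/2 - sqrt(n) ("close"),
    comparing the distance against n/2 suffices. Cost: n bits.
    """
    n = len(x)
    dist = sum(a != b for a, b in zip(x, y))  # Hamming distance
    return "far" if dist > n / 2 else "close"
```

The paper's lower bound says no randomized protocol can do asymptotically better than this.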

Constant Round Non-Malleable Protocols using One-Way Functions

Vipul Goyal
Affiliations: Microsoft Research, India
Abstract:
We provide the first constant round
constructions of non-malleable commitment and zero-knowledge protocols
based only on one-way functions. This improves upon several previous
(incomparable) works which required either: (a) a super-constant number of
rounds, or, (b) non-standard or sub-exponential hardness assumptions,
or, (c) non-black-box simulation and collision-resistant hash functions.
These constructions also allow us to obtain the first constant round
multiparty computation protocol by relying only on the existence of
constant round oblivious transfer protocols.
A simple modification of our commitment scheme
gives a construction which makes use of the underlying one-way function
in a black-box way. The modified construction satisfies a slightly
weaker (yet natural) notion of non-malleability which still suffices to
obtain a (fully) black-box multiparty computation protocol. This allows
us to obtain a constant round multiparty computation protocol making
only a black-box use of the standard cryptographic primitives with
polynomialtime hardness.

Fixed-parameter tractability of multicut parameterized by the size of the cutset

D\'aniel Marx and Igor Razgon
Abstract:
Given an undirected graph $G$, a collection
$\{(s_1,t_1), \dots, (s_k,t_k)\}$ of pairs of vertices, and an integer
$p$, the Edge Multicut problem asks if there is a set $S$ of at
most $p$ edges such that the removal of $S$ disconnects every $s_i$ from
the corresponding $t_i$. Vertex Multicut is the analogous problem where
$S$ is a set of at most $p$ vertices. Our main result is that both
problems can be solved in time $2^{O(p^3)}\cdot n^{O(1)}$, i.e.,
fixed-parameter tractable parameterized by the size $p$ of the
cutset in the solution. By contrast, it is unlikely that an algorithm
with running time of the form $f(p)\cdot n^{O(1)}$ exists for the
directed version of the problem, as we show it to be W[1]-hard
parameterized by the size of the cutset.

Secure Computation with Information Leaking to an Adversary

Miklos Ajtai
Affiliations: IBM Almaden Research Center
Abstract:
Assume that Alice is running a program $P$ on a
RAM $M$, and an adversary Bob would like to get some information about
the input of the program. Sometimes Bob is able to see the contents and
addresses of the registers involved in the instruction which is
executed. We will call a time when this happens a compromised time.
Bob can choose the compromised times in an adaptive way, that is,
immediately before the instruction at time $t$ is executed, Bob, using
all of the information at his disposal, can decide whether time $t$ will
be compromised or not. The only restriction on his choice is that
among $m$ consecutive instructions there can be at most $\epsilon m$
whose time is compromised, where $\epsilon>0$ is a small constant.
Such an adversary will be called an $(\epsilon,m)$-moderate adversary.
Alice wants to replace program $P$ with a program $P'$ with the same
input/output, and which has the property that with high probability Bob
will not gain any information about the input and the output of $P$. We
assume that the total time used by $P$ and its total memory requirement
and the times of the input and output instructions are fixed in advance
and are known to Bob. All of this knowledge together will be called
the conceded information. So the requirement on $P'$ is that the
adversary Bob, with high probability, will not gain any non-conceded
information. Of course such a $P'$ does not exist if the input arrives
in its original form, because Bob, choosing the compromised times at
random, can see each input bit with a probability of $\epsilon$.
Therefore we assume that the program $P'$ gets its input in an encoded
form, namely each input bit $b$ is encoded by a random $0,1$-sequence of
length $m$ whose parity is $b$. The output bits of $P'$ are encoded in
a similar way. We also assume that the total memory requirement of $P$
is $n$, $\log n \le m \le n$, and there is a bound on the total time
$T$ used by $P$, e.g., $T$ is polynomial in $n$.
We show that, under these assumptions, each program $P$ can be replaced
by an input/output equivalent $P'$, so that with a probability of at
least $1-e^{-cm}$, where $c>0$ is a constant, Bob does not gain any
non-conceded information about the input and output of $P'$. The time
and space requirements of $P'$ are greater than the corresponding
requirements of $P$ only by a factor polynomial in $m$. Moreover, such a
$P'$ can be chosen in a way that it is oblivious. More precisely, with
high probability, Bob will not gain any non-conceded information about
the input even if he, in addition to the previously described
information, knows at each time which instruction has been executed and
which memory cells were accessed by this instruction, and can use this
extra information in his adaptive choice of compromised times.
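The input encoding described above hides each bit as the parity of a random length-$m$ block; any proper subset of a block's bits is uniformly distributed, so an adversary who sees fewer than $m$ of them learns nothing about $b$. A Python sketch (names ours):

```python
import random

def encode_bit(b, m, rng=random):
    """Encode bit b as a random 0/1 sequence of length m whose parity is b."""
    block = [rng.randrange(2) for _ in range(m - 1)]
    block.append((b - sum(block)) % 2)  # the last bit fixes the parity
    return block

def decode_bit(block):
    """The encoded bit is the parity of the whole block."""
    return sum(block) % 2
```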

Optimal Constant-Time Approximation Algorithms and (Unconditional) Inapproximability Results for Every Bounded-Degree CSP

Yuichi Yoshida
Affiliations: Kyoto University and Preferred Infrastructure
Abstract:
Raghavendra (STOC 2008) gave an elegant and
surprising result: if Khot's Unique Games Conjecture (STOC 2002) is
true, then for every constraint satisfaction problem (CSP), the best
approximation ratio is attained by a certain simple semidefinite
programming relaxation and a rounding scheme for it.
In this paper, we show that similar results hold for constant-time
approximation algorithms in the bounded-degree model. Specifically, we
present the following: (i) For every CSP, we construct an oracle that
provides access, in constant time, to a nearly optimal solution to a
basic LP relaxation of the CSP. (ii) Using the oracle, we give a
constant-time rounding scheme that achieves an approximation ratio
coincident with the integrality gap of the basic LP. (iii) Finally, we
give a generic conversion from integrality gaps of basic LPs to hardness
results. All of these results are unconditional. Therefore, for every
bounded-degree CSP, we give the best possible constant-time approximation
algorithm.
A CSP instance is called $\epsilon$-far from satisfiability if we must
remove at least an $\epsilon$-fraction of constraints to make it
satisfiable. A CSP is called testable if there is a constant-time
algorithm that distinguishes satisfiable instances from $\epsilon$-far
instances with probability at least $2/3$. Using the results above, we
also derive, under a technical assumption, an equivalent condition under
which a CSP is testable in the bounded-degree model.

Near-Optimal Private Approximation Protocols via a Black-Box Transformation

David P. Woodruff
Affiliations: IBM Research-Almaden
Abstract:
We show the following general
transformation: any two-party protocol for outputting a
(1+eps)-approximation to f(x,y) = sum_{j=1}^n g(x_j, y_j) with
probability at least 2/3, for any nonnegative efficiently computable
function g, can be compiled into a two-party private approximation
protocol with only a polylogarithmic factor loss in communication,
computation, and round complexity. In general it is insufficient to use
secure function evaluation or fully homomorphic encryption on a
standard, non-private protocol for approximating f. This is because the
approximation may reveal information about x and y that does not follow
from f(x,y). Applying our transformation and variations of it, we obtain
near-optimal private approximation protocols for a wide range of
problems in the data stream literature for which previously nothing was
known. We give near-optimal private approximation protocols for the
l_p-distance for every p >= 0, for the heavy hitters and importance
sampling problems with respect to any l_p-norm, for the max-dominance
and other dominant l_p-norms, for the distinct summation problem, for
entropy, for cascaded frequency moments, for subspace approximation and
block sampling, and for measuring independence of datasets. Using a
result for data streams, we obtain private approximation protocols with
polylogarithmic communication for every non-decreasing and symmetric
function g(x_j,y_j) = h(x_j-y_j) with at most quadratic growth. If the
original (non-private) protocol is a simultaneous protocol, e.g., a
sketching algorithm, then our only cryptographic assumption is efficient
symmetric computationally-private information retrieval; otherwise it
is fully homomorphic encryption. For all but one of these problems, the
original protocol is a sketching algorithm. Our protocols generalize
straightforwardly to more than two parties.

Cover times, blanket times, and majorizing measures

Jian Ding and James R. Lee and Yuval Peres
Abstract:
We exhibit a strong connection between cover
times of graphs, Gaussian processes, and Talagrand's theory of
majorizing measures. In particular, we show that the cover time of any
graph G is equivalent, up to universal constants, to the square of the
expected maximum of the Gaussian free field on G, scaled by the number
of edges in G.
This allows us to resolve a number of open questions. We give a
deterministic polynomialtime algorithm that computes the cover time to
within an $O(1)$ factor for any graph, answering a question of Aldous
and Fill (1994). We also positively resolve the blanket time conjectures
of Winkler and Zuckerman (1996), showing that for any graph, the
blanket and cover times are within an O(1) factor. The best previous
approximation factor for both these problems was O((\log \log n)^2) for
n-vertex graphs, due to Kahn, Kim, Lov\'asz, and Vu (2000).
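For concreteness (ours, not from the paper): the cover time of a graph is the expected number of simple-random-walk steps until every vertex has been visited. A single-trial simulation sketch:

```python
import random

def cover_time(adj, start, rng=random):
    """One random-walk trial: steps taken until all vertices are visited.

    adj: dict mapping each vertex to a list of its neighbors.
    """
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        v = rng.choice(adj[v])  # uniform step to a neighbor
        visited.add(v)
        steps += 1
    return steps
```

Averaging many trials approximates the cover time, but only slowly; the paper's point is a deterministic polynomial-time O(1)-factor approximation.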

Breaking the $k^2$ barrier for explicit RIP matrices

Jean Bourgain, S. J. Dilworth, Kevin Ford, Sergei Konyagin, Denka Kutzarova
Affiliations: Institute For Advanced
Study, University of South Carolina, University of Illinois, Steklov
Mathematical Institute, Bulgarian Academy of Sciences/University of
Illinois
Abstract:
We give a new explicit construction of $n\times N$
matrices satisfying the Restricted Isometry Property (RIP). Namely, for
some $\epsilon>0$, large $k$ and $k^{2-\epsilon} \le N\le
k^{2+\epsilon}$, we construct RIP matrices of order $k$ with
$n=O(k^{2-\epsilon})$. This overcomes the natural barrier $n=O(k^2)$ for
proofs based on small coherence, which are used in all previous
explicit constructions of RIP matrices. Key ingredients in our proof
are new estimates for sumsets in product sets and for exponential sums
with the products of sets possessing special additive structure.
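The "small coherence" route mentioned above bounds the RIP order of a matrix by the maximum inner product between distinct unit-normalized columns; as a standard fact (not from the paper), coherence $\mu$ gives RIP of order roughly $1/\mu$, and $\mu$ cannot fall below roughly $1/\sqrt{n}$, which is the source of the $n=O(k^2)$ barrier. An illustrative sketch for computing coherence:

```python
import math

def coherence(columns):
    """Maximum |<a_i, a_j>| over distinct unit-normalized columns."""
    def unit(c):
        nrm = math.sqrt(sum(x * x for x in c))
        return [x / nrm for x in c]
    cols = [unit(c) for c in columns]
    mu = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            mu = max(mu, abs(sum(a * b for a, b in zip(cols[i], cols[j]))))
    return mu
```

For the columns (1,0), (0,1), (1,1) the coherence is 1/sqrt(2), attained by the third column against either basis vector.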

Analyzing Network Coding Gossip Made Easy

Bernhard Haeupler
Affiliations: MIT
Abstract:
We give a new technique to analyze the
stopping time of gossip protocols that are based on random linear
network coding (RLNC). Our analysis drastically simplifies, extends and
strengthens previous results. We analyze RLNC gossip in a general
framework for network and communication models that encompasses and
unifies the models used previously in this context. We show, in most
settings for the first time, that it converges with high probability in
the information-theoretically optimal time. Most stopping times are of
the form O(k + T) where k is the number of messages to be distributed
and T is the time it takes to disseminate one message. This means RLNC
gossip achieves ``perfect pipelining''.
Our analysis directly extends to highly dynamic networks in which the
topology can change completely at any time. This remains true even if
the network dynamics are controlled by a fully adaptive adversary that
knows the complete network state. Virtually nothing besides simple O(kT)
sequential flooding protocols was previously known for such a setting.
While RLNC gossip works in this wide variety of networks, its analysis
remains the same and extremely simple. This contrasts with more complex
proofs that were put forward to give less strong results for various
special cases.

Deterministic Construction of a high dimensional $\ell_p$ section in $\ell_1^n$ for any $p<2$

Zohar S. Karnin
Affiliations: Technion
Abstract:
For any $0<r<p<2$, and $\epsilon>0$,
we give an efficient deterministic construction of a linear subspace $V
\subseteq R^n$, of dimension $(1-\epsilon)n$ in which the $\ell_p$ and
$\ell_r$ norms are the same up to a multiplicative factor of
$poly(\epsilon^{-1})$ (after the correct normalization). As a corollary
we get a deterministic compressed sensing algorithm (Basis Pursuit) for a
new range of parameters. In particular, for
any constant $\epsilon>0$ and $p<2$, we obtain a linear operator
$A:R^n \rightarrow R^{\epsilon n}$ with the $\ell_1/\ell_p$ guarantee
for $(n \cdot poly(\epsilon))$-sparse vectors. Namely, let $x$ be a
vector in $R^n$ whose $\ell_1$ distance from a $k$-sparse vector (for
some $k=n \cdot poly(\epsilon)$) is $\delta$. The algorithm, given $Ax$
as input, outputs an $n$-dimensional vector $y$ such that $\|x-y\|_p
\leq \delta \cdot k^{1/p-1}$. In particular this gives a weak form of the $\ell_2/\ell_1$ guarantee.
Our construction has the additional benefit that when viewed as a
matrix, $A$ has at most $O(1)$ nonzero entries in each row. As a
result, both the encoding (computing $Ax$) and decoding (retrieving $x$
from $Ax$) can be computed efficiently.

Schaefer's Theorem for Graphs

Manuel Bodirsky and Michael Pinsker
Affiliations: CNRS/Ecole Polytechnique and Equipe Logique Mathematique, Paris 7
Abstract:
Schaefer's theorem is a complexity
classification result for socalled Boolean constraint satisfaction
problems: it states that every Boolean constraint satisfaction problem
is either contained in one out of six classes and can be solved in
polynomial time, or is NP-complete. We present an analog of this
dichotomy result for the first-order logic of graphs instead of Boolean
logic. In this generalization of Schaefer's result, the input consists
of a set W of variables and a conjunction Phi of statements
(``constraints'') about these variables in the language of graphs, where
each statement is taken from a fixed finite set Psi of allowed
formulas; the question is whether Phi is satisfiable in a graph. We
prove that either Psi is contained in one out of 17 classes of graph
formulas and the corresponding problem can be solved in polynomial time,
or the problem is NP-complete. This is achieved by a
universal-algebraic approach, which in turn allows us to use structural
Ramsey theory. To apply the universal-algebraic approach, we formulate
the computational problems under consideration as constraint
satisfaction problems (CSPs) whose templates are first-order definable
in the countably infinite random graph. Our method for classifying the
computational complexity of those CSPs then produces many statements of
independent mathematical interest.

Tight Bounds for Parallel Randomized Load Balancing

Christoph Lenzen and Roger Wattenhofer
Affiliations: Computer Engineering and Networks Laboratory (TIK), ETH Zurich
Abstract:
We explore the fundamental limits of distributed
ballsintobins algorithms, i.e., algorithms where balls act in
parallel, as separate agents. This problem was introduced by Adler et
al., who showed that non-adaptive and symmetric algorithms cannot
reliably perform better than a maximum bin load of Theta(log log n / log
log log n) within the same number of rounds. We present an adaptive
symmetric algorithm that achieves a bin load of two in log* n + O(1)
communication rounds using O(n) messages in total. Moreover, larger bin
loads can be traded in for smaller time complexities. We prove a
matching lower bound of (1-o(1)) log* n on the time complexity of
symmetric algorithms that guarantee small bin loads at an asymptotically
optimal message complexity of O(n). The essential preconditions of the
proof are (i) a limit of O(n) on the total number of messages sent by
the algorithm and (ii) anonymity of bins, i.e., the port numberings of
balls are not globally consistent. In order to show that our technique
indeed yields tight bounds, we provide for each assumption an algorithm
violating it, in turn achieving a constant maximum bin load in constant
time.
As an application, we consider the following problem. Given a fully
connected graph of n nodes, where each node needs to send and receive up
to n messages, and in each round each node may send one message over
each link, deliver all messages as quickly as possible to their
destinations. We give a simple and robust algorithm of time complexity
O(log* n) for this task and provide a generalization to the case where
all nodes initially hold arbitrary sets of messages. Completing the
picture, we give a less practical, but asymptotically optimal algorithm
terminating within O(1) rounds. All these bounds hold with high
probability.
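For contrast with the algorithms above (an illustration, not from the paper): the classical one-shot process, in which each ball independently picks a uniformly random bin, needs no communication at all but incurs a maximum load of Theta(log n / log log n) with high probability. A simulation sketch:

```python
import random

def max_load(n, rng=random):
    """Throw n balls into n bins, each ball choosing a bin uniformly.

    Returns the maximum bin load of this non-adaptive, one-round process.
    """
    loads = [0] * n
    for _ in range(n):
        loads[rng.randrange(n)] += 1
    return max(loads)
```

The paper's adaptive algorithms use a few communication rounds to drive this load down to a small constant.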

Pseudorandom Generators for Group Products

Michal Koucky and Prajakta Nimbhorkar and Pavel Pudlak
Affiliations: Institute of Mathematics, Prague and Chennai Mathematical Institute and Institute of Mathematics, Prague
Abstract:
We prove that the pseudorandom generator
introduced in Impagliazzo et al. (1994) with proper choice of
parameters fools group products of a given finite group G. The seed
length is O(log n (|G|^{O(1)} + log 1/delta)), where n is the length of
the word and delta is the allowed error. The result is equivalent to the
statement that the pseudorandom generator with seed length O(log n
(2^{O(w log w)} + log 1/delta)) fools read-once permutation branching
programs of width w. As an application of the pseudorandom generator one
obtains small-bias spaces for products over all finite groups (Meka and
Zuckerman, 2009).

Contraction Decomposition in H-Minor-Free Graphs and Algorithmic Applications

Erik Demaine and Mohammad Hajiaghayi and Ken-ichi Kawarabayashi
Affiliations: MIT and University of Maryland and National Institute of Informatics in Japan
Abstract:
We prove that any graph excluding a fixed minor
can have its edges partitioned into a desired number $k$ of color
classes such that contracting the edges in any one color class results
in a graph of treewidth linear in~$k$. This result is a natural finale
to research in contraction decomposition, generalizing previous such
decompositions for planar and bounded-genus graphs, and solving the main
open problem in this area (posed at SODA 2007). Our decomposition can
be computed in polynomial time, resulting in a general framework for
approximation algorithms, particularly PTASs (with $k \approx
1/\epsilon$), and fixed-parameter algorithms,
for problems closed under contractions in graphs excluding a fixed
minor. For example, our approximation framework gives the first PTAS for
TSP in weighted $H$-minor-free graphs, solving a decade-old open
problem of Grohe; and gives another fixed-parameter algorithm for
$k$-cut in $H$-minor-free graphs, which was an open problem of Downey et
al.\ even for planar graphs.
To obtain our contraction decompositions, we develop new graph
structure theory to realize virtual edges in the clique-sum
decomposition by actual paths in the graph, enabling the use of the
powerful Robertson-Seymour Graph Minor decomposition theorem in the
context of edge contractions (without edge deletions).
This requires careful construction of paths to avoid blowup in the number of required paths beyond~$3$.
Along the way, we strengthen and simplify contraction decompositions
for bounded-genus graphs, so that the partition is determined by a
simple radial ball growth independent of handles, starting from a set of
vertices instead of just one, as long as this set is tight in a certain
sense. We show that this tightness property holds for a constant number
of approximately shortest paths in the surface, introducing
several new concepts such as dives and rainbows.

Correlation testing for affine invariant properties on $\F_p^n$ in the high error regime

Hamed Hatami and Shachar Lovett
Affiliations: McGill University and IAS
Abstract:
Recently there has been much interest in
Gowers uniformity norms from the perspective of theoretical computer
science. This is mainly due to the fact that these norms provide a
method for testing whether the maximum correlation of a function
$f:\mathbb{F}_p^n \rightarrow \mathbb{F}_p$ with polynomials of degree
at most $d \le p$ is nonnegligible, while making only a constant number
of queries to the function. This is an instance of {\em correlation
testing}. In this framework, a fixed test is applied to a function, and
the acceptance probability of the test is dependent on the correlation
of the function with the property. This is an analog of {\em proximity
oblivious testing}, a notion coined by Goldreich and Ron, in the high
error regime.
We study in this work general properties which are affine invariant and
which are correlation testable using a constant number of queries. We
show that any such property (as long as the field size is not too small)
can in fact be tested by the Gowers uniformity test, and hence having
correlation with the property is equivalent to having correlation with
degree $d$ polynomials for some fixed $d$. We stress that our result
holds also for nonlinear properties which are affine invariant. This
completely classifies affine invariant properties which are correlation
testable.
The proof is based on higher-order Fourier analysis, where we establish a
new approximate orthogonality for structures defined by linear forms.
In particular, this resolves an open problem posed by Gowers and Wolf.
Another ingredient is a nontrivial extension of a graph theoretical
theorem of Erd\"os, Lov\'asz and Spencer to the context of additive
number theory.

Every Property of Hyperfinite Graphs is Testable

Ilan Newman and Christian Sohler
Affiliations: University of Haifa and TU Dortmund
Abstract:
A property testing algorithm for a property $\Pi$
in the bounded degree graph model\cite{GR97} is an algorithm that, given
access to the adjacency list representation of a graph $G=(V,E)$ with
maximum degree at most $d$, accepts $G$ with probability at least
$2/3$, if $G$ has property $\Pi$ and rejects $G$ with probability at
least $2/3$, if it differs on more than $\epsilon dn $ edges from every
graph with property $\Pi$. A property is \emph{testable}, if for every
$\epsilon,d$ and $n$, there is a property testing algorithm
$A_{\epsilon,n,d}$ that makes at most $q(\epsilon,d)$ queries to the
graph, i.e., a non-uniform algorithm that makes only a constant number of
queries to the graph.
A $k$-disc around a vertex $v$ of a graph $G=(V,E)$ is the subgraph
induced by all vertices of distance at most $k$ from $v$. We show that
the structure of a planar graph on $n\ge N(\epsilon,d)$ vertices and
with constant maximum degree $d$ is determined, up to the deletion of
$\epsilon d n$ edges, by the frequency of $K$-discs for certain
$K=K(\epsilon,d)$ that is independent of the size of the graph. We can
replace planar graphs by any hyperfinite class of graphs, which
includes, for example, every graph class that does not contain a set of
forbidden minors.
We use this result to obtain new results and improve upon existing results in the area of property testing.
In particular, we prove that
\begin{itemize}
\item graph isomorphism is testable,
\item every graph property is testable for every class of hyperfinite graphs,
\item every hyperfinite graph property is testable,
\item a large class of graph parameters is approximable for hyperfinite graphs.
\end{itemize}
Our results also give a partial explanation of the success of motifs in the analysis of complex networks.
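The $k$-disc statistic underlying the result above can be collected by breadth-first search; a minimal Python sketch (ours, assuming an adjacency-list dict):

```python
from collections import deque

def k_disc_vertices(adj, v, k):
    """Vertices at distance at most k from v.

    The k-disc itself is the subgraph induced by this vertex set.
    """
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not expand beyond radius k
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)
```

A tester samples a constant number of vertices and compares the empirical distribution of their $K$-discs against the frequencies admissible for the property.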

Fast Moment Estimation in Data Streams in Optimal Space

Daniel M. Kane and Jelani Nelson and Ely Porat and David P. Woodruff
Affiliations: Harvard University, MIT, Bar-Ilan University, IBM Almaden Research Center
Abstract:
We give a spaceoptimal algorithm with
update time $O(\log^2(1/\eps)\log\log(1/\eps))$ for
$(1\pm\eps)$approximating the $p$th frequency moment, $0 < p <
2$, of a length$n$ vector updated in a data stream. This provides a
nearly exponential improvement over the previous spaceoptimal algorithm
of [KaneNelsonWoodruff, SODA 2010], which had update time
$\Omega(1/\eps^2)$.
When combined with the work of [HarveyNelsonOnak, FOCS 2008], we also
obtain the first algorithm for entropy estimation in turnstile streams
which simultaneously achieves nearoptimal space and fast update time.
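For reference (ours, not from the paper), the quantity being $(1\pm\eps)$-approximated is the $p$-th frequency moment $F_p = \sum_i |f_i|^p$ of the frequency vector. Computing it exactly takes space linear in the number of distinct items, which is what streaming algorithms avoid; an exact sketch for an insert-only stream:

```python
def frequency_moment(stream, p):
    """Exact p-th frequency moment F_p = sum_i f_i^p over item frequencies."""
    freq = {}
    for item in stream:
        freq[item] = freq.get(item, 0) + 1
    return sum(f ** p for f in freq.values())
```

For the stream "aabbb" and p = 2, the frequencies are 2 and 3, so F_2 = 4 + 9 = 13.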

An Algorithm for the Graph Crossing Number Problem

Julia Chuzhoy
Affiliations: Toyota Technological Institute at Chicago
Abstract:
We study the Minimum Crossing Number
problem: given an n-vertex graph G, the goal is to find a drawing of G
in the plane with minimum number of edge crossings. This is one of the
central problems in topological graph theory, that has been studied
extensively over the past three decades. The first nontrivial efficient
algorithm for the problem, due to Leighton and Rao, achieved an
$O(n\log^4n)$-approximation for bounded-degree graphs. This algorithm
has since been improved by polylogarithmic factors, with the best
current approximation ratio standing at $O(n\poly(d) \log^{3/2}n)$ for
graphs with maximum degree d. In contrast, only APX-hardness is known
on the negative side.
In this paper we present an efficient randomized algorithm to find a
drawing of any n-vertex graph G in the plane with $O(OPT^{10}\poly(d
\log n))$ crossings, where OPT is the number of crossings in the optimal
solution, and d is the maximum vertex degree in G. This result implies
an $\tilde{O}(n^{9/10} \poly(d))$-approximation for Minimum Crossing
Number, thus breaking the long-standing $\tilde{O}(n)$-approximation
barrier for bounded-degree graphs.

From Affine to Two-Source Extractors via Approximate Duality

Eli Ben-Sasson, Noga Zewi
Affiliations: Department of Computer Science, Technion - Israel Institute of Technology
Abstract:
Two-source and affine extractors and
dispersers are fundamental objects studied in the context of
derandomization. This paper shows how to construct two-source extractors
and dispersers for arbitrarily small min-entropy rate in a black-box
manner from affine extractors with sufficiently good parameters. Our
analysis relies on the study of approximate duality, a concept related
to the polynomial Freiman-Ruzsa conjecture (\PFR) from additive
combinatorics.
Two black-box constructions of two-source extractors from affine ones
are presented. Both constructions work for min-entropy rate
$\rho<\frac12$. One of them can potentially reach arbitrarily small
min-entropy rate provided the affine extractor used to construct it
outputs, on affine sources of min-entropy rate $\frac12$, a relatively
large number of output bits and has sufficiently small error.
Our results are obtained by first showing that each of our constructions
yields a two-source disperser for a certain min-entropy rate
$\rho<\frac12$ and then using a general extractor-to-disperser
reduction that applies to a large family of constructions. This
reduction says that any two-source disperser for min-entropy rate $\rho$
coming from this family is also a two-source extractor for min-entropy
rate $\rho+\eps$ for arbitrarily small $\eps>0$.
The extractor-to-disperser reduction arises from studying {\em
approximate duality}, a notion related to additive combinatorics. The
{\em duality measure} of two sets $A,B\subseteq \F_2^n$ aims to quantify
how ``close'' these sets are to being dual and is defined as
\[\dc(A,B)=\left|\E_{a\in A, b\in B}\left[(-1)^{\sum_{i=1}^n a_i
b_i}\right]\right|.\]
Notice that $\dc(A,B)=1$ implies that $A$ is contained in an affine
shift of $B^\perp$, the space dual to the $\F_2$-span of $B$. We
study what can be said of $A,B$ when their duality measure is large but
strictly smaller than $1$ and show that $A,B$ contain subsets $A',B'$ of
nontrivial size for which $\dc(A',B')=1$ and consequently $A'$ is
contained in an affine shift of $(B')^\perp$. Surprisingly, the \PFR
implies that such $A',B'$ exist even if the duality measure is
exponentially small in $n$, and this implication leads to two-source
extractors with exponentially small error.
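To make the definition concrete, here is a minimal Python sketch of the duality measure on small sets over $\F_2^3$ (the sets below are our own toy data, not from the paper):

```python
def duality_measure(A, B):
    """DC(A, B) = |E_{a in A, b in B}[(-1)^{<a,b> mod 2}]| for sets of 0/1 tuples."""
    total = sum((-1) ** (sum(ai * bi for ai, bi in zip(a, b)) % 2)
                for a in A for b in B)
    return abs(total) / (len(A) * len(B))

# B = {(1,1,0), (0,0,1)}; the dual of its F_2-span is {(0,0,0), (1,1,0)}.
# Taking A inside that dual space makes every inner product even, so DC = 1.
B = [(1, 1, 0), (0, 0, 1)]
A = [(0, 0, 0), (1, 1, 0)]
print(duality_measure(A, B))  # 1.0

# Perturbing one element of A drops the measure strictly below 1.
print(duality_measure([(0, 0, 0), (1, 0, 0)], B))  # 0.5
```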

On Optimal Single-Item Auctions

Christos Papadimitriou and George Pierrakos
Affiliations: UC Berkeley
Abstract:
We revisit the problem of designing the
profit-maximizing single-item auction, solved by Myerson in his seminal
paper for the case in which bidder valuations are independently
distributed. We focus on general joint distributions, seeking the
optimal deterministic incentive compatible auction. We give a geometric
characterization of the optimal auction through a duality theorem,
resulting in an efficient algorithm for finding the optimal
deterministic auction in the two-bidder case and an inapproximability
result for three or more bidders.

Directed Spanners via Flow-Based Linear Programs

Michael Dinitz and Robert Krauthgamer
Affiliations: Weizmann Institute of Science
Abstract:
We examine directed spanners through flow-based
linear programming relaxations. We design an
$\tilde{O}(n^{2/3})$-approximation algorithm for the directed
$k$-spanner problem that works for all $k\geq 1$, which is the first
sublinear approximation for arbitrary edge lengths. Even in the more
restricted setting of unit edge lengths, our algorithm improves over the
previous $\tilde{O}(n^{1-1/k})$-approximation of Bhattacharyya,
Grigorescu, Jung, Raskhodnikova, and Woodruff [SODA 2009] when $k\ge
4$. For the special case of $k=3$ we design a different algorithm
achieving an $\tilde{O}(\sqrt{n})$-approximation, improving on the
previous $\tilde{O}(n^{2/3})$. (Independently, a recent result
of Berman, Raskhodnikova, and Ruan [FSTTCS 2010] achieves an
approximation of $O(k n^{1-1/\lceil k/2 \rceil} \log n)$ when edge
lengths are unit.) Both of our algorithms easily extend to the
fault-tolerant setting, which has recently attracted attention but not
from an approximation viewpoint. We also prove a nearly matching
integrality gap of $\Omega(n^{\frac{1}{3} - \epsilon})$ for any constant
$\epsilon > 0$.
A virtue of all our algorithms is that they are
relatively simple. Technically, we introduce a new yet natural
flowbased relaxation, and show how to approximately solve it even when
its size is not polynomial. The main challenge is to design a rounding
scheme that ``coordinates'' the choices of flow-paths between the many
demand pairs while using few edges overall. We achieve this, roughly
speaking, by randomization at the level of vertices.

Mechanism design with uncertain inputs (to err is human, to forgive divine)

Uriel Feige and Moshe Tennenholtz
Abstract:
We consider a task of scheduling with a
common deadline on a single machine. Every player reports to a scheduler
the length of his job and the scheduler needs to finish as many jobs as
possible by the deadline. For this simple problem, there is a truthful
mechanism that achieves maximum welfare in dominant strategies. The new
aspect of our work is that in our setting players are uncertain about
their own job lengths, and hence are incapable of providing truthful
reports (in the strict sense of the word). For a probabilistic model of
uncertainty, our main results are as follows.
1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the welfare.
2) To remedy this situation, we introduce a new measure of economic
efficiency, based on a notion of a {\em fair share} of a player, and
design mechanisms that are $\Omega(1)$-fair. In addition to its
intrinsic appeal, our notion of fairness implies good approximation of
maximum welfare in several cases of interest.
3) In our mechanisms the machine is sometimes left idle even though
there are jobs that want to use it. We show that this unfavorable aspect
is unavoidable, unless one gives up other favorable aspects (e.g.,
$\Omega(1)$-fairness).
We also consider a qualitative approach to uncertainty as an alternative
to the probabilistic quantitative model. In the qualitative approach we
break away from solution concepts such as dominant strategies (they are
no longer well defined), and instead suggest an axiomatic approach,
which amounts to listing desirable properties for mechanisms. We provide
a mechanism that satisfies these properties.

Santa Claus Schedules Jobs on Unrelated Machines

Ola Svensson
Affiliations: KTH  Royal Institute of Technology
Abstract:
One of the classic results in scheduling theory is
the $2$approximation algorithm by Lenstra, Shmoys, and Tardos for the
problem of scheduling jobs to minimize makespan on unrelated machines,
i.e., job $j$ requires time $p_{ij}$ if processed on machine $i$. More
than two decades after its introduction it is still the algorithm of
choice even in the restricted model where processing times are of the
form $p_{ij} \in \{p_j, \infty\}$. This problem, also known as the
restricted assignment problem, is NP-hard to approximate within a factor
less than $1.5$, which is also the best known lower bound for the
general version.
Our main result is a polynomial time algorithm that estimates the
optimal makespan of the restricted assignment problem within a factor
$33/17 + \epsilon \approx 1.9412 + \epsilon$, where $\epsilon > 0$ is
an arbitrarily small constant. The result is obtained by upper bounding
the integrality gap of a certain strong linear program, known as
the configuration LP, which was previously used successfully for the related
Santa Claus problem. As in the strongest analysis for that problem,
our proof is based on a local search algorithm that will eventually
find a schedule with the stated approximation guarantee, but is not
known to converge in polynomial time.

Rank Bounds for Design Matrices with Applications to Combinatorial Geometry and Locally Correctable Codes

Boaz Barak and Zeev Dvir and Avi Wigderson and Amir Yehudayoff
Affiliations: Microsoft Research New England, Princeton, IAS, Technion
Abstract:
Our main result is a matrix rank lower bound
that follows from the combinatorial structure of its support. A
\emph{$(q,k,t)$-design matrix} is an $m \times n$ matrix whose pattern
of zeros/nonzeros satisfies the following designlike conditions: (1)
each row has at most $q$ nonzeros, (2) each column has at least $k$
nonzeros, and (3) the supports of every two columns intersect in at
most $t$ rows. We prove that the rank of any $(q,k,t)$-design matrix
over a field of characteristic zero (or sufficiently large finite
characteristic) is at least \[ n - \left(\frac{qtn}{2k}\right)^2 . \]
The proof combines standard eigenvalue
estimates and less standard use of results on matrix {\em scaling}.
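As an illustration, the design conditions and the rank bound can be checked directly on a toy matrix (the example below, with disjoint column supports, is our own and not from the paper):

```python
import numpy as np
from itertools import combinations

def is_design_matrix(M, q, k, t):
    """Check the (q, k, t)-design conditions on the zero/nonzero pattern of M."""
    S = M != 0
    return (all(row.sum() <= q for row in S)            # <= q nonzeros per row
            and all(col.sum() >= k for col in S.T)      # >= k nonzeros per column
            and all((S[:, i] & S[:, j]).sum() <= t      # supports meet in <= t rows
                    for i, j in combinations(range(S.shape[1]), 2)))

# Toy (1, 2, 0)-design matrix over the rationals: disjoint column supports.
m, n, q, k, t = 6, 3, 1, 2, 0
M = np.zeros((m, n))
for j in range(n):
    M[2 * j, j] = M[2 * j + 1, j] = 1.0

assert is_design_matrix(M, q, k, t)
bound = n - (q * t * n / (2 * k)) ** 2   # the theorem's rank lower bound
print(np.linalg.matrix_rank(M), ">=", bound)  # 3 >= 3.0
```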
Using this result we derive, via combinatorial reductions, the following
(related) applications:
\begin{description}
\item[Impossibility results for $2$-query LCCs over large fields.]
A $2$-query locally correctable code (LCC) is an error-correcting
code in which every codeword coordinate can be recovered,
probabilistically, by reading at most two other code positions. Such
codes have many applications, and constructions (with exponential
encoding length) are known over finite fields of small characteristic.
We show that infinite families of such linear $2$-query LCCs \emph{do
not exist} over fields of characteristic zero or large characteristic
regardless of the encoding length.
\item[Generalization of classical theorems in combinatorial
geometry.] We prove the following quantitative analog of the
Sylvester-Gallai theorem: Let $V$ be a (finite, but arbitrarily large)
set of points in Euclidean space, and call a line {\em special} if it
contains at least three points of $V$. Assume that for every point $v$
in $V$, at least a $\delta$-fraction of all points are on special lines
through $v$. Then for every $\delta >0$ the dimension of $V$ is
bounded by $c/\delta^2$ for some absolute constant $c$. This result is
an important special case of the coding result above, and may be viewed
as a ``list-decoding'' version of the original Sylvester-Gallai theorem
(in which $\delta =1$) and the subsequent Szemer\'edi-Trotter theorem (in
which $\delta = .999$).
We also use our techniques to derive a few other results in
combinatorial geometry, including an ``averagecase'' version of our
theorem above, as well as a quantitative variant of the Motzkin-Rabin
theorem (which is a colored version of the Sylvester-Gallai theorem).
\end{description}

A simpler algorithm and shorter proof for the graph minor decomposition

Kenichi Kawarabayashi and Paul Wollan
Affiliations: National Institute for Informatics Tokyo and University of Rome La Sapienza
Abstract:
At the core of the Robertson-Seymour theory of
graph minors lies a powerful decomposition theorem which captures, for
any fixed graph~$H$, the common structural features of all the graphs
which do not contain $H$ as a minor. Robertson and Seymour used this
result to prove Wagner's Conjecture that finite graphs are
well-quasi-ordered under the graph minor relation, as well as to give a
polynomial time algorithm for the disjoint paths problem when the number
of the terminals is fixed. The theorem has since found numerous
applications, both in graph theory and theoretical computer science.
The original proof runs more than 400 pages and the techniques used are
highly nontrivial.
Robertson and Seymour's proof yields an $O(n^3)$-time algorithm to find
the decomposition. In this paper, we give a simplified algorithm for
finding the decomposition based on a new constructive proof of the
decomposition theorem for graphs excluding a fixed minor $H$. The new
proof is both dramatically simpler and shorter, making these results and
techniques more accessible. The algorithm runs in time $O(n^3)$ as
well. Moreover, our proof gives an explicit bound on the constants in
the $O$notation.

Finding topological subgraphs is fixedparameter tractable

Martin Grohe and Kenichi Kawarabayashi and Dániel Marx and Paul Wollan
Abstract:
We show that for every fixed undirected
graph $H$, there is an $O(|V(G)|^3)$-time algorithm that tests, given a
graph $G$, if $G$ contains $H$ as a topological subgraph (that is, if a
subdivision of $H$ is a subgraph of $G$). This shows that topological
subgraph testing is fixed-parameter tractable, resolving a longstanding
open question of Downey and Fellows from 1992.
As a corollary, for every $H$ we obtain an $O(|V(G)|^3)$-time algorithm
that tests if there is an immersion of $H$ into a given graph $G$. This
answers another open question raised by Downey and Fellows in 1995.

Constant-Round Non-Malleable Commitments from Any One-Way Function

Huijia Lin, Rafael Pass
Affiliations: Cornell University
Abstract:
We show \emph{unconditionally} that the
existence of commitment schemes implies the existence of
\emph{constant-round} non-malleable commitments; earlier protocols
required additional assumptions such as collision-resistant hash
functions or subexponential one-way functions.
Our protocol also satisfies the stronger notions of concurrent
non-malleability and robustness. As a corollary, we establish that
constant-round non-malleable zero-knowledge arguments for $\NP$ can be
based on one-way functions and that constant-round secure multiparty
computation can be based on enhanced trapdoor permutations; here too,
earlier protocols additionally required either collision-resistant hash
functions or subexponential one-way functions.

Privacy-preserving Statistical Estimation with Optimal Convergence Rates

Adam Smith
Affiliations: Pennsylvania State University
Abstract:
Consider an analyst who wants to release aggregate
statistics about a data set containing sensitive information. Using
{\em differentially private} algorithms guarantees that the released
statistics reveal very little about any particular record in the data
set. In this paper we study the asymptotic properties of
differentially private algorithms for statistical inference.
We show that for a large class of statistical
estimators $T$ and input distributions $P$, there is a differentially
private estimator $A_T$ with the same asymptotic distribution as $T$.
That is, the random variables $A_T(X)$ and $T(X)$ converge in
distribution when $X$ consists of an i.i.d. sample from $P$ of
increasing size. This implies that $A_T(X)$ is essentially as good as
the original statistic $T(X)$ for statistical inference. Our
technique applies to (almost) any pair $T,P$ such that $T$ is
asymptotically normal on i.i.d. samples from $P$; in particular, to
parametric maximum likelihood estimators and estimators for logistic
and linear regression under standard regularity conditions.
A consequence of our techniques is the
existence of low-space streaming algorithms whose output converges to
the same asymptotic distribution as a given estimator $T$ (for the
same class of estimators and input distributions as above).
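For intuition, here is a standard textbook construction (not the paper's general transformation): a differentially private sample mean adds Laplace noise of scale proportional to $1/(\epsilon n)$; since that noise is $O(1/n)$ while the sampling fluctuation of the mean is $O(1/\sqrt{n})$, the private estimate shares the mean's asymptotic distribution.

```python
import random

def laplace(scale, rng=random):
    """Laplace(0, scale) sample: difference of two i.i.d. exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_mean(xs, eps, lo=0.0, hi=1.0):
    """eps-differentially private mean of values clamped to [lo, hi].

    Changing one record moves the clamped mean by at most (hi - lo)/n,
    so Laplace noise of scale (hi - lo)/(n * eps) suffices for eps-DP.
    """
    n = len(xs)
    clamped = [min(max(x, lo), hi) for x in xs]
    return sum(clamped) / n + laplace((hi - lo) / (n * eps))

random.seed(0)
xs = [i / 999 for i in range(1000)]   # toy data with mean ~0.4995
print(private_mean(xs, eps=1.0))      # close to 0.4995; noise scale is only 0.001
```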

Learning Submodular Functions

Maria Florina Balcan and Nicholas J. A. Harvey
Affiliations: Georgia Institute of
Technology, School of Computer Science and University of Waterloo,
Department of Combinatorics and Optimization.
Abstract:
There has recently been significant interest in
the machine learning and algorithmic game theory communities in
understanding and using submodular functions. Despite this substantial
interest, little is known about their learnability from data. Motivated
by applications such as pricing goods in economics, this paper considers
PAC-style learning of submodular functions in a distributional setting.
A problem instance consists of a distribution on $\set{0,1}^n$ and a
realvalued function on $\set{0,1}^n$ that is nonnegative, monotone,
and submodular. We are given $\poly(n)$ samples from this distribution,
along with the values of the function at those sample points. The task
is to approximate the value of the function to within a multiplicative
factor at subsequent sample points drawn from the same distribution,
with sufficiently high probability. We develop the first theoretical
analysis of this problem, proving a number of important and nearly tight
results:
\begin{itemize}
\item If the function is Lipschitz and the distribution is any product
distribution, such as the uniform distribution, then a good
approximation is possible: there is an algorithm that approximates the
function to within a factor $O(\log(1/\epsilon))$ on a set of measure
$1-\epsilon$, for any $\epsilon > 0$.
\item If we do not assume that the distribution is a product
distribution, then we prove a surprising lower bound: no algorithm can
approximate the function to within a factor of $\tilde{O}(n^{1/3})$ on a
set of measure $1/2+\epsilon$, for any constant $\epsilon > 0$. This
holds even if the function is Lipschitz.
\item On the other hand, this negative result is nearly tight: for an
arbitrary distribution, there is an algorithm that approximates the
function to within a factor $\sqrt{n}$ on a set of measure $1-\epsilon$.
\end{itemize}
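As a concrete instance of the objects being learned, consider a coverage function (a toy example of our own): it is nonnegative, monotone, and submodular, and the diminishing-returns condition can be verified by brute force on a small ground set.

```python
from itertools import chain, combinations

# Each ground-set element covers some "items"; f(S) counts the items covered.
COVERS = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}

def f(S):
    return len(set().union(*(COVERS[i] for i in S))) if S else 0

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def is_submodular(n):
    """Diminishing returns: f(S+i) - f(S) >= f(T+i) - f(T) for all S <= T, i not in T."""
    for T in map(set, subsets(range(n))):
        for S in map(set, subsets(T)):
            for i in set(range(n)) - T:
                if f(S | {i}) - f(S) < f(T | {i}) - f(T):
                    return False
    return True

print(is_submodular(3))  # True: coverage functions are submodular
```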
Our work combines central issues in optimization (submodular functions
and matroids) with central topics in learning (distributional learning
and PAC-style analyses) and with central concepts in pseudorandomness
(lossless expander graphs). Our analysis involves a twist on the usual
learning theory models and uncovers some interesting structural and
extremal properties of submodular functions, which we suspect are likely
to be useful in other contexts. In particular, to prove our general
lower bound, we use lossless expanders to construct a new family of
matroids which can take wildly varying rank values on superpolynomially
many sets; no such construction was previously known. This construction
shows unexpected extremal properties of submodular functions.

Pseudorandom Generators for Combinatorial Shapes

Parikshit Gopalan and Raghu Meka and Omer Reingold and David Zuckerman
Affiliations: Microsoft Research, SVC and UT Austin and Microsoft Research, SVC and UT Austin
Abstract:
We construct pseudorandom generators for {\sl
combinatorial shapes}, which substantially generalize combinatorial
rectangles, 0/1 halfspaces, and 0/1 modular sums. A function $f:[m]^n
\rightarrow \{0,1\}$ is an $(m,n)$-combinatorial shape if there exist sets
$A_1,\ldots,A_n \subseteq [m]$ and a symmetric function $h:\{0,1\}^n
\rightarrow \{0,1\}$ such that $f(x_1,\ldots,x_n) =
h(1_{A_1}(x_1),\ldots,1_{A_n}(x_n))$.
Our generator uses seed length $O(\log m + \log n + \log^2(1/\eps))$ to
get error $\eps$. When $m =2$, this gives the first generator of seed
length $O(\log n)$ that fools all weight-based tests, meaning that the
distribution of the weight of any subset is $\eps$close to the
appropriate binomial distribution in statistical distance. Along the
way, we give a generator for combinatorial rectangles with seed length
$O(\log^{3/2}n)$ and error $1/\poly(n)$, matching Lu's bound [ICALP
1998].
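For concreteness, here is a toy $(m,n)$-combinatorial shape; the sets $A_i$ and the choice of symmetric function $h$ (majority) are our own illustrative assumptions:

```python
m, n = 4, 3
A = [{0, 1}, {1, 2}, {2, 3}]   # A_i are subsets of [m] = {0, 1, 2, 3}

def shape(x):
    """f(x) = h(1_{A_1}(x_1), ..., 1_{A_n}(x_n)) with h = majority (symmetric)."""
    bits = [int(x[i] in A[i]) for i in range(n)]
    return int(sum(bits) >= 2)

print(shape((0, 1, 2)))  # all three indicators fire -> 1
print(shape((3, 0, 0)))  # no indicator fires -> 0
```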
For our proof we give a simple lemma which allows us to convert
closeness in Kolmogorov (cdf) distance to closeness in statistical
distance. As a corollary of our technique, we give an alternative proof
of a powerful variant of the classical central limit theorem showing
convergence in statistical distance, instead of the usual Kolmogorov
distance.

NP-Hardness of Approximately Solving Linear Equations Over Reals

Subhash Khot and Dana Moshkovitz
Affiliations: NYU and MIT
Abstract:
In this paper, we consider the problem of
approximately solving a system of homogeneous linear equations over
reals, where each equation contains at most three variables.
Since the allzero assignment always satisfies all the equations
exactly, we restrict the assignments to be ``nontrivial''. Here is an
informal statement of our result: it is NP-hard to distinguish whether
there is a nontrivial assignment that satisfies a $1-\delta$ fraction of
the equations or every nontrivial assignment fails to satisfy a
constant fraction of the equations with a ``margin'' of
$\Omega(\sqrt{\delta})$.
We develop linearity and dictatorship testing procedures for functions
$f: \R^n \mapsto \R$ over a Gaussian space, which could be of
independent interest.
We believe that studying the complexity of linear equations over reals,
apart from being a natural pursuit, can lead to progress on the Unique
Games Conjecture.

Towards Coding for Maximum Errors in Interactive Communication

Mark Braverman and Anup Rao
Abstract:
We show that it is possible to encode any
communication protocol between two parties so that the protocol succeeds
even if a $(1/4 - \epsilon)$ fraction of all symbols transmitted by
the parties are corrupted adversarially, at a cost of increasing the
communication in the protocol by a constant factor (the constant depends
on $\epsilon$). This encoding uses a constant-sized alphabet. This
improves on an earlier result of Schulman, who showed how to recover
when the fraction of errors is bounded by $1/240$. We also show how to
simulate an arbitrary protocol with a protocol using the binary
alphabet, with a constant factor increase in communication, tolerating a
$1/8 - \epsilon$ fraction of errors.

K-Median Clustering, Model-Based Compressive Sensing, and Sparse Recovery for Earth Mover Distance

Piotr Indyk and Eric Price
Affiliations: MIT
Abstract:
We initiate the study of sparse recovery
problems under the Earth-Mover Distance (EMD). Specifically, we design a
distribution over m x n matrices A such that for any vector x, given
Ax, we can recover a k-sparse approximation to x under the EMD.
One construction yields m=O(k log (n/k)) and a 1 + eps approximation
factor, which matches the best achievable bound for other error
measures, such as the l_1 norm.
Our algorithms are obtained by exploiting novel connections to other
problems and areas, such as streaming algorithms for k-median
clustering and model-based compressive sensing. We also provide novel
algorithms and results for the latter problems.

Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs

Paul Christiano and Jonathan A. Kelner and Aleksander Madry and Daniel A. Spielman and ShangHua Teng
Abstract:
We introduce a new approach to computing an
approximately maximum s-t flow in a capacitated, undirected graph. This
flow is computed by solving a sequence of electrical flow problems.
Each electrical flow is given by the solution of a system of linear
equations in a Laplacian matrix, and thus may be approximately computed
in nearly-linear time.
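To illustrate the primitive (a dense numpy computation on a toy graph of our own, not the paper's fast solver): the unit s-t electrical flow comes from a single Laplacian solve and satisfies flow conservation automatically.

```python
import numpy as np

# Toy undirected network with unit conductances: a 4-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, s, t = 4, 0, 2

# Graph Laplacian: L = sum over edges of (e_u - e_v)(e_u - e_v)^T.
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

# Vertex potentials solve L phi = e_s - e_t; L is singular (constant vectors
# lie in its kernel), so use the pseudoinverse.
b = np.zeros(n); b[s], b[t] = 1.0, -1.0
phi = np.linalg.pinv(L) @ b

# The electrical flow on edge (u, v) is the potential drop phi[u] - phi[v].
flow = {(u, v): phi[u] - phi[v] for u, v in edges}

# Kirchhoff's law: one unit enters at s, leaves at t, conserved elsewhere.
net = np.zeros(n)
for (u, v), fval in flow.items():
    net[u] += fval; net[v] -= fval
print(net)  # approximately [1, 0, -1, 0]
```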
Using this approach, we develop the fastest known algorithm for
computing approximately maximum s-t flows. For a graph having n
vertices and m edges, our algorithm computes a
(1-\epsilon)-approximately maximum s-t flow in time \tilde{O}(mn^{1/3}
\epsilon^{-11/3}). A dual version of our approach computes a
(1+\epsilon)-approximately minimum s-t cut in time
\tilde{O}(m+n^{4/3}\eps^{-16/3}), which is the fastest known algorithm
for this problem as well. Previously, the
best dependence on m and n was achieved by running the algorithm of
Goldberg and Rao (J. ACM 1998) in the divide-and-conquer sparsification
framework of Bencz\'ur and Karger (CoRR 2002), yielding approximately
maximum s-t flows in time \tilde{O}(m\sqrt{n}\epsilon^{-1}) and
approximately minimum s-t cuts in time
\tilde{O}(m+n^{3/2}\epsilon^{-3}).

A General Framework for Graph Sparsification

Wai Shing Fung and Ramesh Hariharan and Nicholas J. A. Harvey and Debmalya Panigrahi
Affiliations: University of Waterloo and Strand Life Sciences and University of Waterloo and Massachusetts Institute of Technology
Abstract:
We present a general framework for
constructing graph sparsifiers: weighted subgraphs for which every cut
has the same weight as the original graph, up to a multiplicative factor
of $(1 \pm \epsilon)$. Using this framework, we simplify, unify and
improve upon previous sparsification results. As simple instantiations
of this framework, we show that sparsifiers can be constructed by
sampling edges according to their {\em strength} (a result of Bencz\'ur
and Karger), {\em effective resistance} (a result of Spielman and
Srivastava), {\em edge connectivity}, or by sampling {\em random
spanning trees}. Sampling according to edge connectivity is the most
aggressive method, and the most challenging to analyze. Our proof that
this method produces sparsifiers resolves an open question of Bencz\'ur
and Karger.
While the above results are interesting from a combinatorial standpoint,
we also prove new algorithmic results. In particular, we develop
techniques that give the first (optimal) $O(m)$-time sparsification
algorithm for unweighted graphs. For weighted graphs, our algorithm has a
running time of $O(m) + \tilde{O}(n/\epsilon^2)$, which is also linear
unless the input graph is very sparse itself. In both cases, this
improves upon the previous best running times of $O(m\log^2 n)$ (for the
unweighted case) and $O(m\log^3 n)$ (for the weighted case)
respectively. Our algorithm produces sparsifiers containing
$O(n\log n/\epsilon^2)$ edges in expectation; the only known
construction of sparsifiers with fewer edges is by a substantially
slower algorithm running in $O(n^3 m / \epsilon^2)$ time.
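One instantiation of the framework samples edges by effective resistance; as a sketch (a direct dense computation on a toy graph of our own, nothing like the paper's near-linear-time algorithms), the resistances come from the Laplacian pseudoinverse:

```python
import numpy as np

def effective_resistances(n, edges):
    """R_eff(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), unit edge weights."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)   # pseudoinverse of the (singular) Laplacian
    return [Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges]

# 4-cycle: each edge sees 1 ohm in parallel with a 3-ohm path, so R_eff = 3/4;
# by Foster's theorem the resistances of any connected n-vertex graph sum to n - 1.
R = effective_resistances(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print([round(r, 4) for r in R], round(sum(R), 4))  # [0.75, 0.75, 0.75, 0.75] 3.0
```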
A key ingredient of our proofs is a natural generalization of Karger's
bound on the number of small cuts in an undirected graph. Given the
numerous applications of Karger's bound, we suspect that our
generalization will also be of independent interest.

Breaking O(n^{1/2})-approximation algorithms for the edge-disjoint paths problem

Kenichi Kawarabayashi and Yusuke Kobayashi
Abstract:
In the maximum edgedisjoint paths problem, we
are given a graph and a collection of pairs of vertices, and the
objective is to find the maximum number of pairs that can be routed by
edge-disjoint paths. A $c$-approximation algorithm for this problem is a
polynomial time algorithm that finds at least $OPT / c$ edge-disjoint
paths, where $OPT$ is the maximum possible. Currently, an
$O(n^{\frac{1}{2}})$-approximation algorithm is the best known for this
problem, even if a congestion of two is allowed, i.e., each edge is
allowed to be used in at most two of the paths.
In this paper, we give the first result that breaks the
$O(n^{\frac{1}{2}})$-approximation with congestion two. Specifically, we
give a randomized $O(n^{\frac{3}{7}})$-approximation algorithm. Our
framework for this algorithm is more general in a sense. Indeed, we have
two ingredients which also work for the edge-disjoint paths problem
(with congestion one) if the following conjecture is true.
Conjecture: There is a (randomized) polynomialtime algorithm for
finding $\Omega({OPT}^{\frac{1}{p}}/\beta(n))$ edge-disjoint paths
connecting given terminal pairs, where $\beta$ is a polylogarithmic
function.
Having made this conjecture, we prove the following.
(1) Assuming the above conjecture for some $p>1$, we show that, for some
absolute constant $\alpha > 0$, using the Rao-Zhou algorithm we
can give a randomized $O(n^{\frac{1}{2}-\alpha})$-approximation
algorithm for the edge-disjoint paths problem (with congestion one).
(2) Based on the R\"acke decomposition and Chekuri-Khanna-Shepherd
well-linked sets, we show that there is a randomized algorithm for
finding $\Omega({OPT}^{\frac{1}{4}})$ edge-disjoint paths connecting
given terminal pairs with congestion two (thus confirming the above
conjecture if we allow the congestion to be two).
All of our results also hold for the vertex-disjoint paths problem,
i.e., the case in which paths are required to be vertex-disjoint rather
than edge-disjoint.

Linearizable Implementations Do Not Suffice for Randomized Distributed Computation

Wojciech Golab and Lisa Higham and Philipp Woelfel
Affiliations: HP Labs and University of Calgary and University of Calgary
Abstract:
Linearizability is the gold standard among
algorithm designers for deducing the correctness of a distributed
algorithm using implemented shared objects from the correctness of the
corresponding algorithm using atomic versions of the same objects. We
show that linearizability does not suffice for this purpose when
processes can exploit randomization, and we discuss the existence of
alternative correctness conditions. This paper makes the following
contributions:
1. Various examples demonstrate that using
wellknown linearizable implementations of objects (e.g., snapshots) in
place of atomic objects can change the probability distribution of the
outcomes that the adversary is able to generate. In some cases, an
oblivious adversary can create a probability distribution of outcomes
for an algorithm with implemented objects that not even the strong
adversary can generate for the same algorithm with atomic objects.
2. A new correctness condition for shared
object implementations called strong linearizability is defined. We
prove that the strong adversary (one that sees the outcome of each coin
flip immediately) gains no additional power when atomic objects are
replaced by strongly linearizable implementations. Conversely, we
provide evidence that in general no strictly weaker correctness
condition suffices to ensure this.
3. In contrast to the situation for the strong
adversary, for a natural weaker adversary (one that cannot see a
process's coin flip until its next operation on a shared object) we prove
that there is no correspondingly general correctness condition.
Specifically, any linearizable (even blocking) implementation of
counters from atomic registers and loadlinked/storeconditional objects
that satisfies a natural locality property necessarily gives the weak
adversary more power than it has with atomic counters.

Online Bipartite Matching with Unknown Distributions

Chinmay Karande and Aranyak Mehta and Pushkar Tripathi
Affiliations: Google Research, Google Research, Georgia Institute of Technology
Abstract:
We consider the online bipartite matching
problem in the unknown distribution input model. We show that the
RANKING algorithm of \cite{KVV90} achieves a competitive ratio of at
least 0.653. This is the first analysis to show an algorithm that
breaks the natural $1-1/e$ `barrier' in the unknown distribution model
(our analysis in fact works in the stronger random order model) and
answers an open question in \cite{GM08}. We also describe a family of
graphs on which RANKING does no better than 0.727. Finally, we show
that for graphs which have $k > 1$ disjoint perfect matchings,
RANKING achieves a competitive ratio of at least $1 - \sqrt{\frac{1}{k} -
\frac{1}{k^2} + \frac{1}{n}}$; in particular, RANKING achieves a
factor of $1-o(1)$ for graphs with $\omega(1)$ disjoint perfect matchings.
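The RANKING algorithm itself is short; here is a sketch on a toy instance of our own devising:

```python
import random

def ranking(offline, arrivals, rng=random):
    """RANKING (Karp-Vazirani-Vazirani): fix a uniformly random priority order
    (rank) over the offline vertices; each arriving online vertex is matched
    to its unmatched neighbor of smallest rank, if one exists."""
    order = list(offline)
    rng.shuffle(order)
    rank = {v: i for i, v in enumerate(order)}
    matched, matching = set(), []
    for u, nbrs in arrivals:                 # online vertices in arrival order
        free = [v for v in nbrs if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching.append((u, v))
    return matching

# Toy instance: u1 can only go to 'a', so greedy-by-rank finds the perfect
# matching here whatever permutation is drawn.
random.seed(0)
pairs = ranking(["a", "b"], [("u1", ["a"]), ("u2", ["a", "b"])])
print(pairs)  # [('u1', 'a'), ('u2', 'b')]
```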

Near-optimal distortion bounds for embedding doubling spaces into $L_1$

James R. Lee and Anastasios Sidiropoulos
Abstract:
We show that there are n-point doubling
metric spaces which require sqrt{log n/log log n} distortion to embed
into L_1, matching the upper bound of [Gupta-Krauthgamer-Lee, FOCS'03] up
to a sqrt{log log n} factor. The best previous lower bound for doubling
spaces, due to [Cheeger-Kleiner-Naor, FOCS'09], was of the form (log n)^c
for some small, unspecified value of c > 0.
Furthermore, this gives a nearly optimal integrality gap for a weak
version of the SDP for the general Sparsest Cut Problem. The weak SDP
suffices for all known rounding algorithms, and the best previous gap
was of the order (log n)^{1/4} [Lee-Moharrami, STOC'10]. We conjecture
that our construction admits an equivalent metric of negative type.
Resolution of this conjecture would lead to an integrality gap of
sqrt{log n/log log n} for the Goemans-Linial SDP, nearly matching the
upper bound sqrt{log n} log log n of [Arora-Lee-Naor, STOC'05].

Mechanisms for (Mis)allocating Scientific Credit

Jon Kleinberg and Sigal Oren
Affiliations: Cornell
Abstract:
Scientific communities confer many forms of {\em
credit}, both implicit and explicit, on their successful members,
and it has long been argued that the motivation provided by these forms
of credit helps to shape a community's collective attention toward
different lines of research. The allocation of scientific credit,
however, has also been the focus of long-documented pathologies: certain
research questions are said to command too much credit, at the expense
of other equally important questions; and certain researchers (in a
version of Robert Merton's {\em Matthew Effect}) seem to receive a
disproportionate share of the credit, even when the contributions of
others are similar.
Here we show that the presence of each of these pathologies can in fact
increase the collective productivity of a community. We consider a model
for the allocation of credit, in which individuals can choose among
{\em projects} of varying levels of importance and difficulty, and they
compete to receive credit with others who choose the same project. Under
the most natural mechanism for allocating credit, in which it is
divided among those who succeed at a project in proportion to the
project's importance, the resulting selection of projects by
self-interested, credit-maximizing individuals will in general be
socially suboptimal. However, we show that there exist ways of
allocating credit out of proportion to the true importance of the
projects, as well as mechanisms that assign credit out of proportion to
the relative contributions of the individuals, that lead
credit-maximizing individuals to collectively achieve social optimality.
These results therefore suggest how wellknown forms of misallocation
of scientific credit can in fact serve to channel self-interested
behavior into socially optimal outcomes.

Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes

Chandra Chekuri and Jan Vondrák and Rico Zenklusen
Abstract:
We consider the problem of maximizing a {\em
nonnegative} submodular set function $f:2^N \rightarrow \RR_+$ over a
ground set $N$ subject to a variety of packing-type constraints,
including (multiple) matroid constraints, knapsack constraints, and
their intersections. In this paper we develop a general framework that
allows us to derive a number of new results, in particular when $f$ may
be a {\em nonmonotone} function. Our algorithms are based on
(approximately) solving the multilinear extension $F$ of $f$
\cite{CCPV07} over a polytope $P$ that represents the constraints, and
then effectively rounding the fractional solution. Although this
approach has been used quite successfully in some settings
\cite{CCPV09,KulikST09,LeeMNS09,CVZ10,BansalKNS10}, it has been limited
in some important ways. We overcome these limitations as follows.
First, we give constant factor approximation algorithms to maximize $F$
over an {\em arbitrary} down-closed polytope $P$ that has an efficient
separation oracle. Previously this was known only for monotone functions
\cite{Vondrak08}. For non-monotone functions a constant factor was
known only when the polytope was either the intersection of a fixed
number of knapsack constraints \cite{LeeMNS09} or a matroid polytope
\cite{Vondrak09,OV11}. Second, we show that {\em contention resolution
schemes} are an effective way to round the nonlinear function $F$, even
when $f$ is non-monotone. In particular, contention resolution schemes
for different polytopes can be combined easily to handle the
intersection of different constraints. Via LP duality we show that a
contention resolution scheme for a constraint is exactly related to the
{\em correlation gap} \cite{ADSY10} of weighted rank functions of the
constraint. This leads to an {\em optimal} contention resolution scheme
for the matroid polytope.
Our tools provide a powerful and unifying framework for maximizing
linear and submodular functions over independence constraints. We give
several illustrative examples. Contention resolution schemes by
themselves are of independent interest and may find other applications.
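The multilinear extension $F$ at the heart of this framework is defined by $F(x) = \mathbb{E}[f(R)]$, where $R$ contains each element $i$ independently with probability $x_i$; in practice it is evaluated approximately. A minimal sampling sketch (illustrative only, not the paper's implementation; the function name and parameters are ours):

```python
import random

def multilinear_extension(f, x, samples=2000, rng=random.Random(0)):
    """Estimate the multilinear extension F(x) = E[f(R)], where the random
    set R contains element i independently with probability x[i]."""
    n = len(x)
    total = 0.0
    for _ in range(samples):
        R = frozenset(i for i in range(n) if rng.random() < x[i])
        total += f(R)
    return total / samples
```

For the modular function $f(S) = |S|$ and $x = (1/2, 1/2)$, the exact value is $F(x) = 1$, and the estimate concentrates around it.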

Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs

Giuseppe F. Italiano, Yahav Nussbaum, Piotr Sankowski, and Christian Wulff-Nilsen
Affiliations: University of Rome "Tor Vergata", Tel Aviv University, University of Warsaw, and Carleton University
Abstract:
In this paper we study minimum cut and
maximum flow problems on planar graphs, both in static and in dynamic
settings. First, we present an algorithm that given an undirected planar
graph computes the minimum cut between any two given vertices in $O(n
\log \log n)$ time. Second, we show how to achieve the same $O(n \log
\log n)$ bound for the problem of computing maximum flows in undirected
planar graphs. To the best of our knowledge, these are the first
algorithms for those two problems that break the $O(n \log n)$ barrier,
which has been standing for more than 25 years. Third, we present a
fully dynamic algorithm that is able to maintain information about
minimum cuts and maximum flows in a plane graph (i.e., a planar graph
with a fixed embedding): our algorithm is able to insert edges, delete
edges and answer min-cut and max-flow queries between any pair of
vertices in $O(n^{2/3} \log^3 n)$ time per operation. This result is
based on a new dynamic shortest path algorithm for planar graphs which
may be of independent interest. We remark that this is the first known
nontrivial algorithm for min-cut and max-flow problems in a dynamic
setting.

An Impossibility Result for Truthful Combinatorial Auctions with Submodular Valuations

Shahar Dobzinski
Affiliations: Cornell University
Abstract:
We show that every universally truthful
randomized mechanism for combinatorial auctions with submodular
valuations that provides an $m^{\frac{1}{2}-\epsilon}$ approximation to the
social welfare and uses value queries only must use exponentially many
value queries, where $m$ is the number of items. In contrast, ignoring
incentives there exist constant-ratio approximation algorithms for this
problem. Our approach is based on a novel \emph{direct hardness}
approach and completely skips the notoriously hard characterization
step. The characterization step was the main obstacle for proving
impossibility results in algorithmic mechanism design so far.
We demonstrate two additional applications of our new technique: (1) an
impossibility result for universally truthful polynomial-time flexible
combinatorial public projects and (2) an impossibility result for
truthful-in-expectation mechanisms for exact combinatorial public
projects. The latter is the first result that bounds the power of
polynomial-time truthful-in-expectation mechanisms in any setting.

Geometric Complexity Theory and Tensor Rank

Peter Buergisser and Christian Ikenmeyer
Affiliations: University of Paderborn
Abstract:
Mulmuley and Sohoni [GCT1, SICOMP 2001; GCT2,
SICOMP 2008] proposed to view the permanent versus determinant problem
as a specific orbit closure problem and to attack it by methods from
geometric invariant and representation theory. We adopt these ideas
towards the goal of showing lower bounds on the border rank of specific
tensors, in particular for matrix multiplication. We thus study
specific orbit closure problems for the group $G =GL(W_1)\times
GL(W_2)\times GL(W_3)$ acting on the tensor product $W=W_1\otimes
W_2\otimes W_3$ of complex finite dimensional vector spaces. Let $G_s
=SL(W_1)\times SL(W_2)\times SL(W_3)$. A key idea from [GCT2] is that
the irreducible $G_s$representations occurring in the coordinate ring
of the $G$orbit closure of a stable tensor $w\in W$ are exactly those
having a nonzero invariant with respect to the stabilizer group of $w$.
However, we prove that by considering
$G_s$representations, only trivial lower bounds on border rank can be
shown. It is thus necessary to study $G$representations, which leads to
geometric extension problems that are beyond the scope of the subgroup
restriction problems emphasized in [GCT1, GCT2] and their follow-up
papers. We prove a very modest lower bound on the border rank of matrix
multiplication tensors using $G$representations. This shows at least
that the barrier for $G_s$representations can be overcome. To advance,
we suggest the coarser approach to replace the semigroup of
representations of a tensor by its moment polytope. We prove first
results towards determining the moment polytopes of matrix
multiplication and unit tensors.

A quasipolynomial-time algorithm for the quantum separability problem

Fernando Brandao and Matthias Christandl and Jon Yard
Affiliations: Universidade Federal de Minas Gerais, ETH Zurich, Los Alamos National Laboratory
Abstract:
We present a quasipolynomial-time algorithm
solving the weak membership problem for the convex set of separable,
i.e. non-entangled, bipartite density matrices. The algorithm decides
whether a density matrix is separable or eps-away from the set of the
separable states in time exp(O(eps^(-2) log|A| log|B|)), where |A| and
|B| are the local dimensions, and the distance is measured with either
the Euclidean norm, or with the so-called LOCC norm. The latter is an
operationally motivated norm giving the optimal probability of
distinguishing two bipartite quantum states, each shared by two parties,
using any protocol formed by quantum local operations and classical
communication (LOCC) between the parties. We also obtain improved
algorithms for optimizing over the set of separable states and for
computing the ground-state energy of mean-field Hamiltonians.
The techniques we develop are also applied to
quantum Merlin-Arthur games, where we show that multiple provers are not
more powerful than a single prover when the verifier is restricted to
LOCC protocols, or when the verification procedure is formed by a
measurement of small Euclidean norm. This answers a question posed by
Aaronson et al. (Theory of Computing 5, 1, 2009) and provides two new
characterizations of the complexity class QMA.
Our algorithm uses semidefinite programming to
search for a symmetric extension, as first proposed by Doherty, Parrilo
and Spedalieri (Phys. Rev. A 69, 022308, 2004). The bound on the
runtime follows from an improved de Finetti-type bound quantifying the
monogamy of quantum entanglement. This result, in turn, follows from a
new lower bound on the quantum conditional mutual information and the
entanglement measure squashed entanglement.

A Full Derandomization of Schoening's k-SAT Algorithm

Robin A. Moser and Dominik Scheder
Affiliations: ETH Zurich
Abstract:
Schoening in 1999 presented a simple
randomized algorithm for k-SAT with running time O(a^n * poly(n)) for a =
2(k-1)/k. We give a deterministic version of this algorithm running in
time O(a^(n+o(n))).
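Schoening's randomized algorithm is simple enough to sketch: repeatedly draw a uniform random assignment and run a short local random walk, each step flipping a random variable of some unsatisfied clause. A toy Python sketch (the clause encoding and the `tries` parameter are our own conventions):

```python
import random

def schoening(clauses, n, tries=200):
    """Schoening's random walk for k-SAT (sketch).
    clauses: tuples of nonzero ints; literal v means x_v is True, -v means False."""
    for _ in range(tries):
        # start from a uniformly random assignment (index 0 unused)
        assign = [random.random() < 0.5 for _ in range(n + 1)]
        for _ in range(3 * n):
            unsat = [c for c in clauses
                     if not any((lit > 0) == assign[abs(lit)] for lit in c)]
            if not unsat:
                return assign[1:]  # satisfying assignment for x_1..x_n
            # flip the variable of a random literal in a random unsatisfied clause
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
    return None  # no satisfying assignment found
```

Each try succeeds with probability roughly (k/(2(k-1)))^n, which is where the a = 2(k-1)/k base in the running time comes from.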

Rank-1 Bimatrix Games: A Homeomorphism and a Polynomial Time Algorithm

Bharat Adsul and Jugal Garg and Ruta Mehta and Milind Sohoni
Affiliations: Indian Institute of Technology, Bombay
Abstract:
Given a rank-1 bimatrix game (A,B), i.e.,
where rank(A+B)=1, we construct a suitable linear subspace of the rank-1
game space and show that this subspace is homeomorphic to its Nash
equilibrium correspondence. Using this homeomorphism, we give the first
polynomial time algorithm for computing an exact Nash equilibrium of a
rank-1 bimatrix game. This settles an open question posed in Kannan and
Theobald (SODA 2007) and Theobald (2007). In addition, we give a novel
algorithm to enumerate all the Nash equilibria of a rank-1 game and show
that a similar technique may also be applied for finding a Nash
equilibrium of any bimatrix game. This technique also proves the
existence, oddness and the index theorem of Nash equilibria in a
bimatrix game. Further, we extend the rank-1 homeomorphism result to a
fixed-rank game space, and give a fixed-point formulation on $[0,1]^k$
for solving a rank-k game. The homeomorphism and the fixed-point
formulation are piecewise linear and considerably simpler than the
classical constructions.

How to Leak on Key Updates

Allison Lewko and Mark Lewko and Brent Waters
Affiliations: University of Texas at Austin, University of Texas at Austin, University of Texas at Austin
Abstract:
In the continual memory leakage model,
security against attackers who can repeatedly obtain leakage is achieved
by periodically updating the secret key. This is an appealing model
which captures a wide class of sidechannel attacks, but all previous
constructions in this model provide only a very minimal amount of
leakage tolerance \emph{during secret key updates}. Since key updates
may happen frequently, improving security guarantees against attackers
who obtain leakage during these updates is an important problem. In this
work, we present the first cryptographic primitives which are secure
against a superlogarithmic amount of leakage during secret key updates.
We present signature and public key encryption schemes in the standard
model which can tolerate a constant fraction of the secret key to be
leaked between updates as well as \emph{a constant fraction of the
secret key and update randomness} to be leaked during updates. Our
signature scheme also allows us to leak a constant fraction of the
entire secret state during signing. Before this work, it was unknown how
to tolerate superlogarithmic leakage during updates even in the random
oracle model. We rely on subgroup decision assumptions in composite
order bilinear groups.

Separating Succinct Non-Interactive Arguments From All Falsifiable Assumptions

Craig Gentry and Daniel Wichs
Affiliations: IBM Research and NYU
Abstract:
In this paper, we study succinct computationally
sound proofs (arguments) for NP, whose communication complexity is
polylogarithmic in the instance and witness sizes. The seminal works of
Kilian '92 and Micali '94 show that such arguments can be constructed
under standard cryptographic hardness assumptions with four rounds of
interaction, and that they can be made non-interactive in the random-oracle
model. The latter construction also gives us some evidence that succinct
non-interactive arguments (SNARGs) may exist in the standard model with
a common reference string (CRS), by replacing the oracle with a
sufficiently complicated hash function whose description goes in the
CRS. However, we currently do not know of any construction of SNARGs
with a formal proof of security under any simple cryptographic
assumption.
In this work, we give a broad black-box
separation result, showing that black-box reductions cannot be used to
prove the security of any SNARG construction based on any falsifiable
cryptographic assumption. This includes essentially all common
assumptions used in cryptography (one-way functions, trapdoor
permutations, DDH, RSA, LWE etc.). More generally, we say that an
assumption is falsifiable if it can be modeled as an interactive game
between an adversary and an efficient challenger that can efficiently
decide if the adversary won the game. This is similar, in spirit, to the
notion of falsifiability of Naor '03, and captures the fact that we can
efficiently check if an adversarial strategy breaks the assumption.
Our separation result also extends to
designated verifier SNARGs, where the verifier needs a trapdoor
associated with the CRS to verify arguments, and slightly succinct
SNARGs, whose size is only required to be sublinear in the statement and
witness size.

Social Networks Spread Rumors in Sublogarithmic Time

Benjamin Doerr and Mahmoud Fouz and Tobias Friedrich
Affiliations: Max-Planck-Institut für Informatik, Germany
Abstract:
With the prevalence of social networks, it
has become increasingly important to understand their features and
limitations. It has been observed that information spreads extremely
fast in social networks. We study the performance of randomized rumor
spreading protocols on graphs in the preferential attachment model. The
wellknown random phone call model of Karp et al. (FOCS 2000) is a
push-pull strategy where in each round, each vertex chooses a random
neighbor and exchanges information with it. We prove the following.
(i) The push-pull strategy delivers a message to all nodes within
$\Theta(\log n)$ rounds with high probability. The best known bound so
far was $O(\log^2 n)$.
(ii) If we slightly modify the protocol so that contacts are chosen
uniformly from all neighbors but the one contacted in the previous
round, then this time reduces to $\Theta(\log n / \log\log n)$, which is
the diameter of the graph. This is the first time that a sublogarithmic
broadcast time is proven for a natural setting. Also, this is the first
time that avoiding double contacts reduces the runtime to a smaller
order of magnitude.
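The push-pull protocol of the random phone call model is straightforward to simulate; a toy sketch (graph representation and function name are ours, not from the paper):

```python
import random

def push_pull_rounds(adj, start):
    """Simulate push-pull rumor spreading: each round, every node calls a
    uniformly random neighbor and the pair exchanges the rumor.
    adj: dict mapping each node to its neighbor list (connected graph).
    Returns the number of rounds until all nodes are informed."""
    informed = {start}
    rounds = 0
    while len(informed) < len(adj):
        calls = [(v, random.choice(adj[v])) for v in adj if adj[v]]
        newly = set()
        for u, v in calls:
            if u in informed or v in informed:
                newly.update((u, v))  # push if caller informed, pull otherwise
        informed |= newly
        rounds += 1
    return rounds
```

Running this on preferential attachment graphs of increasing size is a quick way to see the logarithmic-type growth the abstract analyzes.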

High-rate codes with sublinear-time decoding

Swastik Kopparty and Shubhangi Saraf and Sergey Yekhanin
Affiliations: Institute for Advanced Study, Massachusetts Institute of Technology, Microsoft Research Silicon Valley
Abstract:
Locally decodable codes are error-correcting codes
that admit efficient decoding algorithms; any bit of the original
message can be recovered by looking at only a small number of locations
of a corrupted codeword. The tradeoff between the rate of a code and the
locality/efficiency of its decoding algorithms has been well studied,
and it has widely been suspected that nontrivial locality must come at
the price of low rate. A particular setting of potential interest in
practice is codes of constant rate. For such codes, decoding algorithms
with locality $O(k^{\epsilon})$ were known only for codes of rate
$\exp(-1/\epsilon)$, where $k$ is the length of the message.
Furthermore, for codes of rate $> 1/2$, no nontrivial locality has
been achieved.
In this paper we construct a new family of
locally decodable codes that have very efficient local decoding
algorithms, and at the same time have rate approaching $1$. We show that
for every $\epsilon > 0$ and $\alpha > 0$, for infinitely many
$k$, there exists a code $C$ which encodes messages of length $k$ with
rate $1  \alpha$, and is locally decodable from a constant fraction of
errors using $O(k^{\epsilon})$ queries and time. The high rate and
local decodability are evident even in concrete settings (and not just
in asymptotic behavior), giving hope that local decoding techniques may
have practical implications.
These codes, which we call multiplicity codes,
are based on evaluating high degree multivariate polynomials and their
derivatives. Multiplicity codes extend traditional multivariate
polynomial based codes; they inherit the localdecodability of these
codes, and at the same time achieve better tradeoffs and flexibility in
their rate and distance.

The Computational Complexity of Linear Optics

Scott Aaronson and Alex Arkhipov
Affiliations: MIT
Abstract:
We give new evidence that quantum
computers, and in fact even rudimentary quantum computers built entirely out
of linear-optical elements, cannot be efficiently simulated by
classical computers. In particular, we define a model of computation in
which identical photons are generated, sent through a linear-optical
network, then nonadaptively measured to count the number of photons in
each mode. This model is not known or believed to be universal for
quantum computation, and indeed, we discuss the prospects for realizing
the model using current technology. On the other hand, we prove that
the model is able to solve sampling problems and search problems that
are classically intractable under plausible assumptions.
Our first result says that, if there exists a
polynomialtime classical algorithm that samples from the same
probability distribution as a linearoptical network, then P^#P=BPP^NP,
and hence the polynomial hierarchy collapses to the third level.
Unfortunately, this result assumes an extremely accurate simulation.
Our main result suggests that even an
approximate or noisy classical simulation would already imply a collapse
of the polynomial hierarchy. For this, we need two unproven
conjectures: the Permanent-of-Gaussians Conjecture, which says that it
is #P-hard to approximate the permanent of a matrix A of independent
N(0,1) Gaussian entries, with high probability over A; and the Permanent
Anti-Concentration Conjecture, which says that |Per(A)| >=
sqrt(n!)/poly(n) with high probability over A. We present evidence for
these conjectures, both of which seem interesting even apart from our
application.
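For intuition, the quantity at the heart of both conjectures, Per(A), can be computed exactly (though in exponential time) by Ryser's inclusion-exclusion formula. A small sketch (generic textbook formula, not the paper's method):

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod over rows i of sum_{j in S} A[i][j].
    Runs in O(2^n * n^2) time (sketch, for small n only)."""
    n = len(A)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For [[1,2],[3,4]] this gives 1*4 + 2*3 = 10, matching the definition of the permanent as the determinant's sum without alternating signs.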
This paper does not assume knowledge of quantum
optics. Indeed, part of its goal is to develop the beautiful theory of
non-interacting bosons underlying our model, and its connection to the
permanent function, in a self-contained way accessible to theoretical
computer scientists.

Dueling Algorithms

Nicole Immorlica and Adam Tauman Kalai and Brendan Lucier and Ankur Moitra and Andrew Postlewaite and Moshe Tennenholtz
Affiliations: Department of Computer
Science, Northwestern University; Microsoft Research New England;
Department of Computer Science, University of Toronto; MIT; Department
of Economics, University of Pennsylvania; Microsoft R&D Israel and
the Technion, Israel
Abstract:
We revisit classic algorithmic search and
optimization problems from the perspective of competition. Rather than a
single optimizer minimizing expected cost, we consider a zerosum game
in which a search problem is presented to two players, whose only goal
is to {\em outperform the opponent}. Such games are typically
exponentially large zerosum games, but they often have a rich
structure. We provide general techniques by which such structure can be
leveraged to find minmaxoptimal and approximate minmaxoptimal
strategies. We give examples of ranking, hiring, compression, and
binary search duels, among others. We give bounds on how often one can
beat the classic optimization algorithms in such duels.

Blackbox identity testing for bounded top fan-in depth-3 circuits: the field doesn't matter

Nitin Saxena and C. Seshadhri
Affiliations: Hausdorff Center for Mathematics and Sandia National Laboratories
Abstract:
Let C be a depth-3 circuit with n variables,
degree d and top fan-in k (called \sps(k,d,n) circuits) over base field
F. It is a major open problem to design a deterministic polynomial time
blackbox algorithm that tests if C is identically zero. Klivans &
Spielman (STOC 2001) observed that the problem
is open even when k is a constant. This case has been subjected to a
serious study over the past few years, starting from the work of Dvir
& Shpilka (STOC 2005).
We give the first polynomial time blackbox algorithm for this problem.
Our algorithm runs in time poly(n)d^k, regardless of the base field. The
\emph{only} field for which polynomial time algorithms were previously
known
is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS
2010). This is the first blackbox algorithm for depth-3 circuits that
does not use the rank-based approaches of Karnin & Shpilka (CCC
2009).
We prove an important tool for the study of depth-3 identities. We design
a blackbox polynomial time transformation that reduces the number of variables
in a \sps(k,d,n) circuit to k variables, but preserves the identity structure.

Don't Rush into a Union: Take Time to Find Your Roots

Mihai Patrascu and Mikkel Thorup
Affiliations: AT&T Labs
Abstract:
We present a new threshold phenomenon in data
structure lower bounds where slightly reduced update times lead to
exploding query times. Consider incremental connectivity, letting $t_u$
be the time to insert an edge and $t_q$ be the query time. For $t_u =
\Omega(t_q)$, the problem is equivalent to the wellunderstood
\emph{union-find} problem: $\proc{InsertEdge}(s,t)$ can be implemented
by $\proc{Union}(\proc{Find}(s), \proc{Find}(t))$. This gives worstcase
time $t_u = t_q = O(\lg n / \lg\lg n)$ and amortized $t_u = t_q =
O(\alpha(n))$.
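The union-find reduction described above ($\proc{InsertEdge}(s,t)$ as $\proc{Union}(\proc{Find}(s),\proc{Find}(t))$) is standard; a compact sketch with path halving and union by rank, the combination behind the amortized $O(\alpha(n))$ bound (class and method names are ours):

```python
class IncrementalConnectivity:
    """Incremental connectivity via union-find (sketch):
    path halving in find + union by rank gives amortized ~alpha(n) per op."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_edge(self, s, t):
        rs, rt = self.find(s), self.find(t)
        if rs == rt:
            return
        if self.rank[rs] < self.rank[rt]:
            rs, rt = rt, rs
        self.parent[rt] = rs          # attach smaller-rank root under larger
        if self.rank[rs] == self.rank[rt]:
            self.rank[rs] += 1

    def connected(self, s, t):
        return self.find(s) == self.find(t)
```

The lower bound in this paper says, roughly, that any structure whose insertions do less work than these find operations must pay for it at query time.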
By contrast, we show that if $t_u = o(\lg n / \lg\lg n)$, the query time
explodes to $t_q \ge n^{1o(1)}$. In other words, if the data structure
doesn't have time to find the roots of each disjoint set (tree) during
edge insertion, there is no effective way to organize the information!
For amortized complexity, we demonstrate a new inverse-Ackermann type trade-off in the regime $t_u = o(t_q)$.
A similar lower bound is given for fully dynamic connectivity, where an
update time of $o(\lg n)$ forces the query time to be $n^{1o(1)}$. This
lower bound allows for amortization and Las Vegas randomization, and
comes close to the known $O(\lg n \cdot (\lg\lg n)^{O(1)})$ upper bound.

From Convex Optimization to Randomized Mechanisms: Toward Optimal Combinatorial Auctions for Submodular Bidders

Shaddin Dughmi and Tim Roughgarden and Qiqi Yan
Affiliations: Stanford University
Abstract:
We design a polynomial-time,
truthful-in-expectation, $(1-1/e)$-approximation mechanism for welfare
maximization for a fundamental class of combinatorial auctions. Our
results apply to bidders with valuations that are {\em matroid rank sums
(MRS)}, which encompass most concrete examples of submodular functions
studied in this context, including coverage functions and matroid
weighted-rank functions. Our approximation factor is the best possible,
even for known and explicitly given coverage valuations, assuming $P
\neq NP$.
Our mechanism is the first truthful-in-expectation and polynomial-time
mechanism to achieve a constant-factor approximation for an $NP$-hard
welfare maximization problem in combinatorial auctions with
heterogeneous goods and restricted valuations.
Our mechanism is an instantiation of a new framework for designing
approximation mechanisms based on randomized rounding algorithms. A
typical such algorithm first optimizes over a fractional relaxation of
the original problem, and then randomly rounds the fractional solution
to an integral one. With rare exceptions, such algorithms cannot be
converted into truthful mechanisms. The high-level idea of our
mechanism design framework is to optimize {\em directly over the
(random) output of the rounding algorithm}, rather than over the {\em
input} to the rounding algorithm. This approach leads to
truthful-in-expectation mechanisms, and these mechanisms can be
implemented efficiently when the corresponding objective function is
concave. For bidders with MRS valuations, we give a novel randomized
rounding algorithm that leads to both a concave objective function and a
$(1-1/e)$-approximation of the optimal welfare.
Our approximation mechanism also provides an interesting separation
between the power of maximal-in-distributional-range mechanisms and that
of universally truthful (or deterministic) VCG-based mechanisms, which
cannot achieve a constant-factor approximation for welfare maximization
with MRS valuations.

Privately Releasing Conjunctions and the Statistical Query Barrier

Anupam Gupta and Moritz Hardt and Aaron Roth and Jonathan Ullman
Affiliations: Carnegie Mellon University and Princeton University and Microsoft Research New England and Harvard University
Abstract:
Suppose we would like to know all answers to a
set of statistical queries C on a data set up to small error, but we
can only access the data itself using statistical queries. A trivial
solution is to exhaustively ask all queries in C. Can we do any better?
1. We show that the number of statistical queries
necessary and sufficient for this task is, up to polynomial
factors, equal to the agnostic learning complexity of C in Kearns'
statistical query (SQ) model. This gives a complete answer to the
question when running time is not a concern.
2. We then show that the problem can be solved
efficiently (allowing arbitrary error on a small fraction of queries)
whenever the answers to C can be described by a submodular function.
This includes many natural concept classes, such as graph cuts and
Boolean disjunctions and conjunctions.
While interesting from a learning-theoretic point of view, our main applications are in privacy-preserving data analysis:
Here, our second result leads to an algorithm that
efficiently releases differentially private answers to all Boolean
conjunctions with 1% average error. This makes progress on an open
problem in privacy-preserving data analysis.
Our first result on the other hand gives
unconditional lower bounds on any differentially private algorithm that
admits a (potentially non-privacy-preserving) implementation using only
statistical queries. Not only our algorithms, but also most known
private algorithms can be implemented using only statistical queries,
and hence are constrained by these lower bounds. Our result therefore
isolates the complexity of agnostic learning in the SQ model as a new
barrier in the design of differentially private algorithms.
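For intuition on the privacy side: the standard building block for answering a single statistical (counting) query under differential privacy is the Laplace mechanism. This is a generic sketch of that building block, not the paper's release algorithm; names and parameters are ours:

```python
import math
import random

def laplace_query(data, predicate, eps, rng=random.Random(0)):
    """Answer the counting query 'fraction of rows satisfying predicate'
    with Laplace noise of scale 1/(eps*n), giving eps-differential privacy.
    Laplace sample drawn by inverse-CDF from a uniform on (-1/2, 1/2)."""
    n = len(data)
    true_ans = sum(1 for row in data if predicate(row)) / n
    u = rng.random() - 0.5
    mag = -math.log(1.0 - 2.0 * abs(u)) / (eps * n)  # |Laplace| magnitude
    noise = mag if u >= 0 else -mag
    return true_ans + noise
```

Answering every conjunction query this way independently is the trivial baseline; the point of the abstract's second result is to do much better for submodular query classes.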

Optimal Path Search in Small Worlds: Dimension Matters

George Giakkoupis and Nicolas Schabanel
Affiliations: Université Paris Diderot and CNRS/Université Paris Diderot
Abstract:
We consider Kleinberg's celebrated small-world
model (2000). This model is based on a $d$-dimensional grid graph of $n$
nodes, augmented by a constant number of long-range links per node. It
is known that this graph has diameter $\Theta(\log n)$, and that a
simple greedy search algorithm visits an expected number of
$\Theta(\log^2 n)$ nodes, which is asymptotically optimal over all
decentralized search algorithms.
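The greedy search algorithm referred to above moves, at each step, to the neighbor (grid or long-range) closest to the target. A one-dimensional toy sketch on a ring (the representation and function name are ours):

```python
def greedy_route(n, long_links, s, t):
    """Greedy routing on a 1-D ring of n nodes with optional long-range links.
    long_links: dict mapping a node to its extra link endpoints.
    From the current node, always move to the neighbor closest to t."""
    def dist(u, v):
        return min((u - v) % n, (v - u) % n)  # ring (grid) distance
    path = [s]
    u = s
    while u != t:
        nbrs = [(u - 1) % n, (u + 1) % n] + long_links.get(u, [])
        u = min(nbrs, key=lambda w: dist(w, t))  # greedy step
        path.append(u)
    return path
```

Because one grid neighbor is always strictly closer to the target, the walk terminates; long-range links only ever shorten it. The question the paper resolves is how short these paths can be made while keeping the number of visited nodes polylogarithmic.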
Besides the number of nodes visited, a relevant measure is the length of
the path constructed by the search algorithm. A decentralized algorithm
by Lebhar and Schabanel (2003) constructs paths of expected length
$O(\log n(\log\log n)^2)$ by visiting the same number of nodes as greedy
search. A natural question, posed by Kleinberg (2006), is whether there
are decentralized algorithms that construct paths of length $O(\log n)$
while visiting only a polylogarithmic number of nodes.
In this paper we resolve this question. For grid dimensionality $d=1$,
we answer the question in the negative, by showing that any
decentralized algorithm that visits a polylogarithmic number of nodes
constructs paths of expected length $\Omega(\log n \log\log n)$. Further
we show that this bound is tight; a simple variant of the algorithm by
Lebhar and Schabanel matches this bound. For dimensionality $d\geq2$,
however, we answer the question in the affirmative; the bound is
achieved by essentially the same algorithm we used for $d=1$. This is
the first time that such a dichotomy, based on the dimensionality $d$,
has been observed for an aspect of this model. Our results are
applicable to the design of peer-to-peer networks, where the length of
the path along which data are transferred is critical for the network's
performance.

Distributed Verification and Hardness of Distributed Approximation

Atish Das Sarma and Stephan Holzer and
Liah Kor and Amos Korman and Danupon Nanongkai and Gopal Pandurangan and
David Peleg and Roger Wattenhofer
Affiliations: Google Research, ETH
Zurich, The Weizmann Institute of Science, Univ. Paris 7, Georgia
Institute of Technology & the University of Vienna, and Nanyang
Technological University & Brown University
Abstract:
We study the {\em verification} problem in
distributed networks, stated as follows. Let $H$ be a subgraph of a
network $G$ where each vertex of $G$ knows which edges incident on it
are in $H$. We would like to verify whether $H$ has some properties,
e.g., if it is a tree or if it is connected. We would like to perform
this verification in a decentralized fashion via a distributed
algorithm. The time complexity of verification is measured as the number
of rounds of distributed communication.
In this paper we initiate a systematic study of distributed
verification, and give almost tight lower bounds on the running time of
distributed verification algorithms for many fundamental problems such
as connectivity, spanning connected subgraph, and $s$-$t$ cut
verification. We then show applications of these results in deriving
strong unconditional time lower bounds on the {\em hardness of
distributed approximation} for many classical optimization problems
including minimum spanning tree, shortest paths, and minimum cut. Many
of these results are the first nontrivial lower bounds for both exact
and approximate distributed computation and they resolve previous open
questions. Moreover, our unconditional lower bound of approximating
minimum spanning tree (MST) subsumes and improves upon the previous
hardness of approximation bound of Elkin [STOC 2004] as well as the
lower bound for (exact) MST computation of Peleg and Rubinovich [FOCS
1999]. Our result implies that there can be no distributed
approximation algorithm for MST that is significantly faster than the
current exact algorithm, for {\em any} approximation factor.
Our lower bound proofs show an interesting connection between
communication complexity and distributed computing which turns out to be
useful in establishing the time complexity of exact and approximate
distributed computation of many problems.

Parallel Repetition of Entangled Games

Julia Kempe and Thomas Vidick
Affiliations: LRI, University of ParisSud, Orsay, France and UC Berkeley
Abstract:
We consider one-round games between a
classical referee and two players. One of the main questions in this
area is the parallel repetition question: Is there a way to decrease the
maximum winning probability of a game without increasing the number of
rounds or the number of players? Classically, efforts to resolve this
question, open for many years, have culminated in Raz's celebrated
parallel repetition theorem on one hand, and in efficient product
testers for PCPs on the other.
In the case where players share entanglement, the only previously known
results are for special cases of games, and are based on techniques that
seem inherently limited. Here we show for the first time that the
maximum success probability of entangled games can be reduced through
parallel repetition, provided it was not initially 1.
Our proof is inspired by a seminal result of Feige and Kilian in the
context of classical two-prover one-round interactive proofs. One of the
main components in our proof is an orthogonalization lemma for
operators, which might be of independent interest.

Subspace Embeddings for the L_1-norm with Applications

Christian Sohler and David Woodruff
Affiliations: TU Dortmund and IBM Almaden
Abstract:
We show there is a distribution over linear mappings
$R:\ell_1^n \rightarrow \ell_1^{O(d \log d)}$, such that with
arbitrarily large constant probability, for any fixed $d$dimensional
subspace $L$, for all $x \in L$ we have $\|x\|_1 \leq \|Rx\|_1 =
O(d^{1.5} \log d)\|x\|_1$. This provides the first analogue of the
ubiquitous subspace Johnson-Lindenstrauss embedding for the
$\ell_1$-norm. Importantly, the target dimension and distortion are
independent of the ambient dimension $n$. We give several applications
of this result, including faster computation of wellconditioned bases,
faster algorithms for least absolute deviation regression and
$\ell_1$-norm best-fit hyperplane problems, as well as the first
single-pass streaming algorithms with low space for these problems. These
results are motivated by practical problems in image analysis, spam
detection, and statistics, where the $\ell_1$-norm is used in studies
where outliers may be safely and effectively ignored. This is because
the $\ell_1$-norm is more robust to outliers than the $\ell_2$-norm.
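The robustness claim can be seen already in one dimension, where the best $\ell_1$ fit of a constant to a data set is the median and the best $\ell_2$ fit is the mean (a toy illustration, not taken from the paper):

```python
# L1 fit of a constant is the median; L2 fit is the mean.
data = [1.0, 1.1, 0.9, 1.0, 100.0]   # one gross outlier

mean = sum(data) / len(data)          # minimizes sum (x - m)^2 over m
median = sorted(data)[len(data) // 2] # minimizes sum |x - m| over m

assert median == 1.0   # barely moved by the outlier
assert mean == 20.8    # dragged far toward the outlier
```

A single corrupted point moves the mean arbitrarily far, while the median ignores it; the same effect drives the $\ell_1$ regression applications above.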

A Unified Framework for Approximating and Clustering Data

Dan Feldman and Michael Langberg
Affiliations: Caltech and OU
Abstract:
Given a set $F$ of $n$ positive functions over a
ground set $X$, we consider the problem of computing $x^*$ that
minimizes $\cost(F,x)=\sum_{f\in F}f(x)$, over $x\in X$. A typical
application is \emph{shape fitting}, where we wish to approximate a set
$P$ of $n$ elements (say, points) by a shape $x$ from a (possibly
infinite) family $X$ of shapes. Here, each point $p\in P$ corresponds to
a function $f$ such that $f(x)$ is the distance from $p$ to $x$, and we
seek a shape $x$ that minimizes the sum of distances from each point in
$P$. In the $k$-clustering variant, each $x\in X$ is a tuple of $k$
shapes, and $f(x)$ is the distance from $p$ to its closest shape in $x$.
Our main result is a unified framework for constructing {\em coresets}
and {\em approximate clustering} for such general sets of functions. To
achieve our results, we forge a link between the classic and
well-defined notion of $\eps$-approximations from the theory of PAC
Learning and VC dimension, to the relatively new (and not so consistent)
paradigm of coresets, which are some kind of ``compressed
representation'' of the input set $F$. Using traditional techniques, a
coreset usually implies an LTAS (linear-time approximation scheme) for
the corresponding optimization problem, which can be computed in
parallel, via one pass over the data, and using only polylogarithmic
space (i.e., in the
streaming model). For several function families $F$ for which coresets
are known not to exist, or the corresponding (approximate) optimization
problems are hard, our framework yields \emph{bicriteria}
approximations.
We demonstrate our unified framework by applying it to projective
clustering problems. We obtain new coreset constructions and
significantly smaller coresets than the ones that have appeared in the
literature in recent years, for problems such as:
\begin{itemize}
\item $k$-Median [Har-Peled and Mazumdar, STOC'04], [Chen, SODA'06], [Langberg and Schulman, SODA'10].
\item $k$-Line median [Feldman, Fiat and Sharir, FOCS'06], [Deshpande and Varadarajan, STOC'07].
\item Projective clustering [Deshpande et al., SODA'06], [Deshpande and Varadarajan, STOC'07].
\item Linear $\ell_p$ regression [Clarkson and Woodruff, STOC'09].
\item Low-rank approximation [Sarl{\'o}s, FOCS'06].
\item Subspace approximation [Shyamalkumar and Varadarajan, SODA'07], [Feldman, Monemizadeh, Sohler and Woodruff, SODA'10],
[Deshpande, Tulsiani, and Vishnoi, SODA'11].
\end{itemize}
The running times of the corresponding optimization problems are also significantly improved.
We show how to generalize the results of our framework for squared
distances (as in $k$-means), distances to the $q$th power, and
deterministic constructions.

The Equivalence of the Random Oracle Model and the Ideal Cipher Model, Revisited

Thomas Holenstein and Robin Künzler and Stefano Tessaro
Abstract:
We consider the cryptographic problem of
constructing an invertible random permutation from a public random
function (i.e., one that can be accessed by the adversary). This goal is
formalized by the notion of indifferentiability of Maurer et al. (TCC
2004). This is the natural extension to the public setting of the
well-studied problem of building random permutations from random
functions, which was first solved by Luby and Rackoff (SIAM J. Comput.,
'88) using the so-called Feistel construction.
The most important implication of such a construction is the equivalence
of the random oracle model (Bellare and Rogaway, CCS '93) and the ideal
cipher model, which is typically used in the analysis of
several constructions in symmetric cryptography.
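The Feistel construction referenced above is easy to state concretely. The following minimal sketch (my illustration, not the paper's indifferentiability construction; the hash-based round function merely stands in for an independent random function) shows how each round is a permutation even though the round function itself is not invertible:

```python
import hashlib

def round_fn(key: int, half: int) -> int:
    # Stand-in for an independent random round function, one per round.
    digest = hashlib.sha256(f"{key}:{half}".encode()).digest()
    return int.from_bytes(digest[:4], "big")  # 32-bit output

def feistel_encrypt(left: int, right: int, keys):
    # One Feistel round maps (L, R) to (R, L xor F_k(R)); this is a
    # permutation regardless of whether F_k is invertible.
    for k in keys:
        left, right = right, left ^ round_fn(k, right)
    return left, right

def feistel_decrypt(left: int, right: int, keys):
    # Invert by undoing the rounds in reverse order.
    for k in reversed(keys):
        left, right = right ^ round_fn(k, left), left
    return left, right

keys = [101, 202, 303, 404, 505, 606]  # six independent round functions
ct = feistel_encrypt(0xDEADBEEF, 0xCAFEF00D, keys)
assert feistel_decrypt(*ct, keys) == (0xDEADBEEF, 0xCAFEF00D)
```

The indifferentiability question is about how many such rounds are needed when the round functions are public random oracles, not about invertibility itself.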
Coron et al. (CRYPTO 2008) gave a rather involved proof that the
six-round Feistel construction with independent random round functions
is indifferentiable from an invertible random permutation. Also, it is
known that fewer than six rounds do not suffice for indifferentiability.
The first contribution (and starting point) of our paper is a concrete
distinguishing attack which shows that the indifferentiability proof of
Coron et al. is not correct. In addition, we provide supporting evidence
that an indifferentiability
proof for the sixround Feistel construction may be very hard to find.
To overcome this gap, our main contribution is a proof that the Feistel
construction with eighteen rounds is indifferentiable from an invertible
random permutation. The approach of our proof relies on assigning to
each of the rounds in the construction a unique and specific role needed
in the proof. This avoids many of the problems that appear in the
six-round case.

Limits of Provable Security From Standard Assumptions

Rafael Pass
Affiliations: Cornell University
Abstract:
We show that the security of some wellknown
cryptographic protocols, primitives and assumptions (e.g., the Schnorr
identification scheme, adaptive selective-decommitment secure
commitments, the one-more inversion RSA assumption) cannot be based on
\emph{any standard assumption} using a Turing (i.e., black-box)
reduction.
These results follow from a general result showing that Turing
reductions cannot be used to prove security of \emph{constant-round
sequentially witness-hiding special-sound protocols} for \emph{unique
witness} relations, based on standard assumptions; we emphasize that
this result holds even if the protocol makes \emph{non-black-box} use of
the underlying assumption.

Optimal Auctions with Correlated Bidders are Easy

Shahar Dobzinski and Hu Fu and Robert Kleinberg
Affiliations: Cornell University
Abstract:
We consider the problem of designing a
revenue-maximizing auction for a single item, when the values of the
bidders are drawn from a correlated distribution. We observe that there
exists an algorithm that finds the optimal randomized mechanism that
runs in time polynomial in the size of the support. We leverage this
result to show that in the oracle model introduced by Ronen and Saberi
[FOCS'02], there exists a polynomial-time truthful-in-expectation
mechanism that provides a $(1.5+\epsilon)$-approximation to the revenue
achievable by an optimal truthful-in-expectation mechanism, and a
polynomial-time deterministic truthful mechanism that guarantees a
$\frac{5}{3}$-approximation to the revenue achievable by an optimal
deterministic truthful mechanism.
We show that the $\frac{5}{3}$-approximation mechanism provides the same
approximation ratio also with respect to the optimal
truthful-in-expectation mechanism. This shows that the performance gap
between truthful-in-expectation and deterministic mechanisms is
relatively small. En route, we solve an open question of Mehta and
Vazirani [EC'04].
Finally, we extend some of our results to the multi-item case, and show
how to compute the optimal truthful-in-expectation mechanisms for
bidders with more complex valuations.

Strong Direct Product Theorems for Quantum Communication and Query Complexity

Alexander A. Sherstov
Affiliations: Microsoft Research, New England
Abstract:
A strong direct product theorem (SDPT) states
that solving n instances of a problem requires Omega(n) times the
resources for a single instance, even to achieve success probability
exp(-Omega(n)). We prove that quantum communication complexity obeys an
SDPT whenever the communication lower bound for a single instance is
proved by the generalized discrepancy method, the strongest technique
in that model. We prove that quantum query complexity obeys an SDPT
whenever the query lower bound for a single instance is proved by the
polynomial method, one of the two main techniques in that model. In
both models, we prove the corresponding XOR lemmas and threshold direct
product theorems.

Inner Product Spaces for MinSum Coordination Mechanisms

Richard Cole and Jose R. Correa and Vasilis Gkatzelis and Vahab Mirrokni and Neil Olver
Affiliations: Courant Institute NYU,
Departamento de Ingenieria Industrial, Universidad de Chile, Courant
Institute NYU, Google Research NY, Department of Mathematics MIT
Abstract:
We study policies aiming to minimize the
weighted sum of completion times of jobs in the context of coordination
mechanisms for selfish scheduling problems. Our goal is to design local
policies that achieve a good price of anarchy in the resulting
equilibria for unrelated machine scheduling. To obtain the approximation
bounds, we introduce a new technique that while conceptually simple,
seems to be quite powerful. The method entails mapping strategy vectors
into a carefully chosen inner product space; costs are shown to
correspond to the norm in this space, and the Nash condition also has a
simple description. With this structure in place, we are able to prove a
number of results, as follows.
First, we consider Smith's rule, which orders the jobs on a machine in
ascending processing time to weight ratio, and show that it achieves an
approximation ratio of $4$. We also demonstrate that this is the best
possible for deterministic non-preemptive strongly local policies. Since
Smith's rule is always optimal for a given fixed assignment, this may
seem unsurprising. But we then show that better approximation ratios can
be obtained, if either preemption or randomization is allowed.
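Smith's rule, as described above, orders a machine's jobs by ascending processing-time-to-weight ratio. A minimal sketch (my illustration of the rule on a single machine, not the paper's coordination-mechanism analysis) of computing the resulting weighted sum of completion times:

```python
def smiths_rule_cost(jobs):
    """jobs: list of (processing_time, weight) pairs on one machine.
    Serve in ascending p/w ratio and return sum_j w_j * C_j."""
    order = sorted(jobs, key=lambda pw: pw[0] / pw[1])
    t, cost = 0, 0
    for p, w in order:
        t += p           # completion time C_j of this job
        cost += w * t
    return cost

# The short, heavy job (p=1, w=2, ratio 0.5) goes first.
assert smiths_rule_cost([(4, 1), (1, 2)]) == 2 * 1 + 1 * 5  # = 7
```

Any other order on this instance costs more (serving the long job first gives 1*4 + 2*5 = 14), which is the classical optimality of Smith's rule for a fixed assignment.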
We prove that ProportionalSharing, a preemptive strongly local policy,
achieves an approximation ratio of $2.618$ for the weighted sum of
completion times, and an approximation ratio of $2.5$ in the unweighted
case. Again, we observe that these bounds are tight. Next, we consider
Rand, a natural non-preemptive but \emph{randomized} policy. We show
that it achieves an approximation ratio of at most $2.13$; moreover, if
the sum of the weighted completion times is negligible compared to the
cost of the optimal solution, this improves to $\pi/2$.
Finally, we show that both ProportionalSharing and Rand induce potential
games, and thus always have a pure Nash equilibrium (unlike Smith's
rule). This also allows us to design the first \emph{combinatorial}
constantfactor approximation algorithm minimizing weighted completion
time for unrelated machine scheduling. It achieves a factor of
$2+\epsilon$ for any $\epsilon > 0$, and involves imitating best
response dynamics using a variant of ProportionalSharing as the policy.

The Topology of Wireless Communication

Erez Kantor and Zvi Lotker and Merav Parter and David Peleg
Affiliations: Technion, Ben Gurion University and Weizmann Institute of Science
Abstract:
In this paper we study the topological properties
of wireless communication maps and their usability in algorithmic
design. We consider the SINR model, which compares the received power
of a signal at a receiver against the sum of strengths of other
interfering signals plus background noise. To describe the behavior of a
multi-station network, we use the convenient representation of a
\emph{reception map}. In the SINR model, the resulting \emph{SINR
diagram} partitions the plane into reception zones, one per station, and
the complementary region of the plane where no station can be heard.
\emph{SINR diagrams} have been studied in \cite{Avin2009PODC} for the
specific case where all stations use the same power. It is shown that
the reception zones are convex (hence connected) and fat, and this is
used to devise an efficient algorithm for the fundamental problem of
point location. Here we consider the more general (and common) case
where transmission energies are arbitrary (or non-uniform). Under that
setting, the reception zones are not necessarily convex or even
connected. This poses the algorithmic challenge of designing efficient
point location techniques for the non-uniform setting, as well as the
theoretical challenge of understanding the geometry of SINR diagrams
(e.g., the maximal number of connected components they might have). We
achieve several results in both directions. We establish a form of
weaker convexity in the case where stations are aligned on a line and
use this to derive a tight bound on the number of connected components
in this case. In addition, one of our key results concerns the behavior
of a $(d+1)$-dimensional map, i.e., a map in one dimension higher than
the dimension in which stations are embedded. Specifically, although the
$d$-dimensional map might be highly fractured, drawing the map in one
dimension higher ``heals'' the zones,
which become connected (in fact hyperbolically connected). As an
algorithmic application of this ``hyperbolic convexity'' property, we
propose a new variant of approximate point location. In addition, as a
step toward establishing a weaker form of convexity for the
$d$-dimensional map, we study the interference function and show that it
satisfies the maximum principle. This is done through an analysis
technique based on looking at the behavior of systems composed of lines
of densely placed weak stations, as the number of stations tends to
infinity, keeping their total transmission energy fixed.

An LLL-Reduction Algorithm with Quasi-linear Time Complexity

Andrew Novocin and Damien Stehlé and Gilles Villard
Affiliations: ENS de Lyon
Abstract:
We devise an algorithm, softL1, with the
following specifications: It takes as input an arbitrary basis B=(b_i)_i
in Z^{d x d} of a Euclidean lattice L; It computes a basis of L which
is reduced for a mild modification of the Lenstra-Lenstra-Lovász
reduction; It terminates in time O(d^{5+eps} beta + d^{omega+1+eps}
beta^{1+eps}), where beta = log max ||b_i|| (for any eps > 0, and omega
is a valid exponent for matrix multiplication). This is the first
LLL-reducing algorithm with a time complexity that is quasi-linear in
the bit-length beta of the entries and polynomial in the dimension d.
The backbone structure of softL1 is able to mimic the Knuth-Schönhage
fast gcd algorithm thanks to a combination of cutting-edge ingredients.
First, the bit-size of our lattice bases can be decreased via truncations
whose validity is backed by recent numerical stability results on the
QR matrix factorization. Also, we establish a new framework for analyzing
unimodular transformation matrices which reduce shifts of reduced
bases; this includes bit-size control and new perturbation tools. We
illustrate the power of this framework by generating a family of
reduction algorithms.

From Low-Distortion Norm Embeddings to Explicit Uncertainty Relations and Efficient Information Locking

Omar Fawzi and Patrick Hayden and Pranab Sen
Affiliations: McGill University, Perimeter Institute and Tata Institute of Fundamental Research
Abstract:
Quantum uncertainty relations are at the heart of
many quantum cryptographic protocols performing classically impossible
tasks. One direct operational manifestation of these uncertainty
relations is a purely quantum effect referred to as information locking.
A locking scheme can be viewed as a cryptographic protocol in which a
uniformly random n-bit message is encoded in a quantum system using a
classical key of size much smaller than n. Without the key, no
measurement of this quantum state can extract more than a negligible
amount of information about the message (the message is "locked").
Furthermore, knowing the key, it is possible to recover (or "unlock")
the message. In this paper, we make the following contributions by
exploiting a connection between uncertainty relations and low-distortion
embeddings of L2 into L1:
- We introduce the notion of metric uncertainty
relations and connect it to low-distortion embeddings of L2 into L1. A
metric uncertainty relation also implies an entropic uncertainty
relation.
- We prove that random bases satisfy
uncertainty relations with a stronger definition and better parameters
than previously known. Our proof is also considerably simpler than
earlier proofs. We apply this result to show the existence of locking
schemes with key size independent of the message length.
- We give efficient constructions of bases
satisfying metric uncertainty relations. These bases are computable by
quantum circuits of almost linear size. This leads to the first explicit
construction of a strong information locking scheme. Moreover, we
present a locking scheme that can in principle be implemented with
current technology. These constructions are obtained by adapting an
explicit norm embedding due to Indyk (STOC'07) and an extractor
construction of Guruswami, Umans and Vadhan (JACM'09).
- We apply our metric uncertainty relations to
give communication protocols that perform equality-testing of n-qubit
states. We prove that this task can be performed by a single message
protocol using O(log(1/e)) qubits and n bits of communication, where e
is an error parameter. We also give a single message protocol that uses
O(log(n)^2) qubits, where the computation of the sender is efficient.

The Power of Simple Tabulation Hashing

Mihai Patrascu and Mikkel Thorup
Affiliations: AT&T Labs
Abstract:
Randomized algorithms are often enjoyed for their
simplicity, but the hash functions used to yield the desired theoretical
guarantees are often neither simple nor practical. Here we show that
the simplest possible tabulation hashing provides unexpectedly strong
guarantees.
The scheme itself dates back to Carter and Wegman (STOC'77). Keys are
viewed as consisting of $c$ characters. We initialize $c$ tables $T_1,
\dots, T_c$ mapping characters to random hash codes. A key $x=(x_1,
\dots, x_c)$ is hashed to $T_1[x_1] \oplus \cdots \oplus T_c[x_c]$,
where $\oplus$ denotes xor.
While this scheme is not even 4-independent, we show that it provides
many of the guarantees that are normally obtained via higher
independence, e.g., Chernoff-type concentration, min-wise hashing for
estimating set intersection, and cuckoo hashing.
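The scheme described above is short enough to state in full; a minimal sketch for 32-bit keys split into $c=4$ byte characters (parameters are my illustrative choices):

```python
import random

random.seed(0)
C, TABLE_SIZE = 4, 256  # four 8-bit characters per 32-bit key

# One table of random 32-bit hash codes per character position.
tables = [[random.getrandbits(32) for _ in range(TABLE_SIZE)]
          for _ in range(C)]

def simple_tabulation(x: int) -> int:
    # Hash of a key is the XOR of one table lookup per character.
    h = 0
    for i in range(C):
        h ^= tables[i][(x >> (8 * i)) & 0xFF]  # T_i[x_i]
    return h

assert simple_tabulation(0x12345678) == (
    tables[0][0x78] ^ tables[1][0x56] ^ tables[2][0x34] ^ tables[3][0x12]
)
```

Each lookup touches a small table that fits in cache, which is what makes the scheme attractive in practice despite its limited independence.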

Subexponential lower bounds for randomized pivoting rules for the simplex algorithm

Oliver Friedmann and Thomas Dueholm Hansen and Uri Zwick
Affiliations: University of Munich, Aarhus University, Tel Aviv University
Abstract:
The \emph{simplex} algorithm is among the most
widely used algorithms for solving \emph{linear programs} in practice.
Most \emph{deterministic} pivoting rules are known, however, to require
an exponential number of steps to solve some linear programs. No
non-polynomial lower bounds were known, prior to this work, for
\emph{randomized} pivoting rules. We provide the first
\emph{subexponential} (i.e., of the form~$2^{\Omega(n^\alpha)}$, for
some $\alpha>0$) lower bounds for the two most natural, and most
studied, randomized pivoting rules suggested to date.
The first randomized pivoting rule we consider
is {\sc RandomEdge}, which among all improving pivoting steps (or
\emph{edges}) from the current basic feasible solution (or
\emph{vertex}) chooses one uniformly at random. The second randomized
pivoting rule we consider is {\sc RandomFacet}, a more complicated
randomized pivoting rule suggested by Kalai (1992) and by Matou{\v{s}}ek,
Sharir and Welzl (1996). Our lower bound for the {\sc RandomFacet}
pivoting rule essentially matches the subexponential upper bounds of
Kalai and Matou{\v{s}}ek et al. Lower bounds for {\sc RandomEdge} and
{\sc RandomFacet} were known before only in \emph{abstract} settings,
and not for concrete linear programs.
Our lower bounds are obtained by utilizing
connections between pivoting steps performed by simplexbased algorithms
and \emph{improving switches} performed by \emph{policy iteration}
algorithms for 1-player and 2-player games. We start by building
2-player \emph{parity games} (PGs) on which suitable randomized policy
iteration algorithms perform a subexponential number of iterations. We
then transform these 2-player games into 1-player \emph{Markov Decision
Processes} (MDPs) which correspond almost immediately to concrete linear
programs.

Almost Settling the Hardness of Noncommutative Determinant

Steve Chien and Prahladh Harsha and Alistair Sinclair and Srikanth Srinivasan
Abstract:
In this paper, we study the complexity of
computing the determinant of a matrix over a noncommutative algebra. In
particular, we ask the question, "over which algebras is the
determinant easier to compute than the permanent?" Towards resolving
this question, we show the following hardness and easiness of
noncommutative determinant computation.
* [Hardness] Computing the determinant of an n×n matrix whose entries are
themselves 2×2 matrices over a finite field is as hard as computing the
permanent. This extends the recent result of Arvind and Srinivasan, who
proved a similar result which however required the entries to be of
linear dimension.
* [Easiness] The determinant of an n×n matrix whose entries are themselves
d×d upper triangular matrices can be computed in poly(n^d) time.
Combining the above with the decomposition theorem of finite dimensional
algebras (in particular exploiting the simple structure of 2x2 matrix
algebras), we can extend the above hardness and easiness statements to
more general algebras as follows. Let A be a finite dimensional algebra
over a finite field with radical R(A).
* [Hardness] If the quotient A/R(A) is noncommutative, then computing
the determinant over the algebra A is as hard as computing the
permanent.
* [Easiness] If the quotient A/R(A) is commutative and furthermore, R(A)
has nilpotency index d (i.e., the smallest d such that R(A)^d = 0), then
there exists a poly(n^d)time algorithm that computes determinants over
the algebra A.
In particular, for any constant dimensional algebra A over a finite
field, since the nilpotency index of R(A) is at most a constant, we have
the following dichotomy theorem: if A/R(A) is commutative, then
efficient determinant computation is feasible and otherwise determinant
is as hard as permanent.

Approximate Polytope Membership Queries

Sunil Arya and Guilherme D. da Fonseca and David M. Mount
Affiliations: HKUST and UNIRIO and University of Maryland
Abstract:
Convex polytopes are central to
computational and combinatorial geometry. In this paper, we consider an
approximate version of a fundamental geometric search problem, polytope
membership queries. Given a convex polytope $P$ in $\RE^d$, presented as
the intersection of halfspaces, the objective is to preprocess $P$ so
that, given a query point $q$, it is possible to determine efficiently
whether $q$ lies inside $P$ subject to an allowed error $\eps$.
Previous solutions to this problem were based on straightforward
applications of classic polytope approximation techniques by Dudley
(1974) and Bentley et al.\ (1982). The former yields minimum storage,
the latter yields constant query time, and a space-time tradeoff can be
obtained by interpolating between the two. We present the first
significant improvements to this tradeoff. For example, using the same
storage as Dudley, we reduce the query time from $O(1/\eps^{(d-1)/2})$
to $O(1/\eps^{(d-1)/4})$. Our approach is based on a very simple
construction algorithm, whose analysis is surprisingly nontrivial. Both
lower bounds and upper bounds on the performance of the algorithm are
presented.
To establish the relevance of our results, we introduce a reduction from
approximate nearest neighbor searching to approximate polytope
membership queries. Remarkably, we show that our tradeoff provides
significant improvements to the best known space-time tradeoffs for
approximate nearest neighbor searching. Furthermore, this is achieved
with constructions that are much simpler than existing methods.

Exact Algorithms for Solving Stochastic Games

Kristoffer Arnsfelt Hansen and Michal Koucký and Niels Lauritzen and Peter Bro Miltersen and Elias P. Tsigaridas
Affiliations: Aarhus University and Czech Academy of Sciences and Aarhus University and Aarhus University and Aarhus University
Abstract:
Shapley's stochastic games and Everett's recursive
games are classical models of game theory describing two-player zero-sum
games of potentially infinite duration. In this paper we describe
algorithms for exactly solving these games. As the values of both kinds
of games may be irrational numbers, our algorithms output the value of
the game given as input in isolating interval representation. When the
number of positions of the game is constant, our algorithms run in
polynomial time and are the first algorithms with this property. As a
byproduct, we give new improved upper and lower bounds on the algebraic
degree of the values of stochastic and recursive games as a function of
the combinatorial parameters of the game. As another byproduct, we give a
new upper bound on the complexity of the strategy iteration algorithm
for concurrent reachability games, a well-studied special case of
recursive games. This upper bound is exponential even when the number of
positions is constant, but prior to this paper only a doubly
exponential upper bound on the complexity of strategy iteration was
known, even for this case.

Online Bipartite Matching with Random Arrivals: A Strongly Factor-Revealing LP Approach

Mohammad Mahdian and Qiqi Yan
Affiliations: Yahoo! Research and Stanford University
Abstract:
In a seminal result, Karp, Vazirani, and Vazirani show that
a simple ranking algorithm achieves an optimal competitive ratio of
$1-1/e$ for the online bipartite matching problem. They also prove
that when nodes arrive in a random order, a simple greedy algorithm
achieves a $1-1/e$ factor. In this paper, we prove that in the random
arrival model, the ranking algorithm of Karp-Vazirani-Vazirani has
competitive ratio at least 0.696, beating the $1-1/e \approx 0.632$
barrier in the adversarial model. Our result also extends to the
i.i.d.\ distribution model of Feldman et al., strengthening their result
by removing the assumption that the distribution is known.
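The ranking algorithm analyzed here is simple to state; the following is a minimal offline simulation (my illustrative sketch, with a toy instance and simple data structures, not the paper's analysis machinery):

```python
import random

def ranking(offline, online_arrivals, neighbors):
    """KVV ranking: draw one uniformly random permutation (ranking) of
    the offline side up front; each arriving online node is matched to
    its highest-ranked currently unmatched neighbor, if any."""
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched = {}  # offline node -> online node
    for u in online_arrivals:
        free = [v for v in neighbors.get(u, []) if v not in matched]
        if free:
            matched[min(free, key=rank.__getitem__)] = u
    return matched

random.seed(1)
nbrs = {"a": ["x", "y"], "b": ["x"], "c": ["y", "z"]}
m = ranking(["x", "y", "z"], ["a", "b", "c"], nbrs)
# Ranking always produces a maximal matching, hence at least half
# the optimum (here the optimum is 3: b-x, a-y, c-z).
assert 2 <= len(m) <= 3
```

The random-arrival model of the paper corresponds to additionally shuffling `online_arrivals` before running the algorithm.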
Our analysis has two main steps. First, we exploit certain dominance
and monotonicity properties of the algorithm to derive a family of
factor-revealing linear programs (LPs). In particular, by symmetry of
the ranking algorithm in the random-arrivals model, we have the
monotonicity property on both sides of the graph, which gives good
``strength'' to the LPs. Second, to obtain a good lower bound on the
optimal values of these LPs and hence on the competitive ratio of the
algorithm, we introduce the technique of strongly factor-revealing
LPs. In particular, we derive a family of modified LPs with similar
strength such that the optimal value of {\em any one} of these new LPs
is a lower bound on the competitive ratio of the algorithm. This
enables us to leverage the power of computer LP solvers to solve for
large instances of the new LPs to establish bounds that would
otherwise be difficult to attain by human analysis.

Estimating the unseen: an n/log(n)-sample estimator for entropy, support size, and other distribution properties, with a proof of optimality via two new central limit theorems

Gregory Valiant and Paul Valiant
Affiliations: UC Berkeley
Abstract:
We introduce a new approach to characterizing
the unobserved portion of a distribution, which provides
sublinear-sample additive estimators for a class of properties that
includes entropy and distribution support size. Together with our new
lower bound, this settles the longstanding question of the sample
complexities of these estimation problems (up to constant factors).
Our algorithm estimates these properties up to
an arbitrarily small additive constant, using O(n/ log n) samples, where
n is a bound on the support size, or, in the case of estimating the
support size, 1/n is a lower bound on the probability of any element of the
domain. Previously, no explicit sublinear-sample algorithms for either
of these problems were known. Additionally, our algorithm runs in time
linear in the number of samples used.
In the second half of the paper, we provide a
matching lower bound of Omega(n/log n) samples for estimating entropy or
distribution support size to within an additive constant. The previous
lower bounds on these sample complexities were n / 2^O(sqrt(log n)). To
show our lower bound, we prove two new and natural multivariate central
limit theorems; the first uses Stein's method to relate the sum of
independent distributions to the multivariate Gaussian of corresponding
mean and covariance, under the earth-mover distance metric (also known as
the Wasserstein metric). We leverage this central limit theorem to
prove a stronger but more specific central limit theorem for
``generalized multinomial'' distributions, a large class of discrete
distributions, parameterized by matrices, that represents sums of
independent binomial or multinomial distributions, and describes many
distributions encountered in computer science. Convergence here is in
the strong sense of statistical distance, which immediately implies that
any algorithm with input drawn from a generalized multinomial
distribution behaves essentially as if the input were drawn from a
discretized Gaussian with the same mean and covariance. Such tools in
the multivariate setting are rare, and we hope this new tool will be of
use to the community.

Almost Tight Bounds for Reordering Buffer Management

Anna Adamaszek and Artur Czumaj and Matthias Englert and Harald Raecke
Affiliations: University of Warwick
Abstract:
In the reordering buffer management problem a
stream of colored items arrives at a service station and has to be
processed. The cost for serving the items heavily depends on the
processing order: serving an item with color $c$, when the most recently
served item had color $c'\neq c$, incurs a context switching cost
$w_c$.
In order to reduce the total processing cost, the serving station is
equipped with a reordering buffer able to store $k$ items. This buffer
can be used to reorder the input sequence in a restricted fashion to
construct an output sequence with lower processing cost. At each point
in time, the buffer contains the first $k$ items of the input sequence
that have not yet been processed. A scheduling strategy has to decide
which item to serve next. Upon its decision, the corresponding item is
removed from the buffer and served, while the next item from the input
sequence takes its place in the buffer.
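The buffer mechanics just described can be simulated directly. A minimal sketch of the cost model with a naive "stay on the current color while possible" strategy (my illustration only, not the paper's algorithm or lower-bound construction):

```python
from collections import deque

def serve_cost(items, k, weight, pick):
    """Simulate a reordering buffer of size k over a stream of colored
    items; pick(buffer, last_color) chooses which buffered color to
    serve next. Serving color c right after color c' != c costs
    weight[c] (a context switch)."""
    stream = deque(items)
    buf, last, cost = [], None, 0
    while stream or buf:
        while stream and len(buf) < k:   # buffer holds next k unserved items
            buf.append(stream.popleft())
        c = pick(buf, last)
        buf.remove(c)                    # serve one item of color c
        if c != last:
            cost += weight[c]            # context-switching cost
        last = c
    return cost

# Greedy strategy: keep serving the current color while the buffer has it.
greedy = lambda buf, last: last if last in buf else buf[0]
w = {"r": 1, "b": 1}
assert serve_cost(list("rbrbrb"), 3, w, greedy) == 2  # buffer of 3 batches colors
assert serve_cost(list("rbrbrb"), 1, w, greedy) == 6  # no reordering possible
```

Even this toy instance shows the point of the buffer: with k = 3 the alternating stream collapses into two color runs, while with k = 1 every item forces a switch.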
This simple and versatile framework has many important applications in
areas like production engineering, computer graphics, storage systems,
and information retrieval, among others. Our results are as follows.
We present the first nontrivial lower bounds on the competitive ratio
of online algorithms. Specifically, we show that any deterministic
online algorithm for the reordering buffer management problem has a
competitive ratio of at least $\Omega(\sqrt{\log k/\log\log k})$, even
in the uniform case when $w_c=1$ for all colors $c$.
Next, we complement this lower bound by presenting a deterministic online
algorithm for the reordering buffer management problem that obtains a
competitive ratio of $O(\sqrt{\log k})$, almost matching the lower
bound. This improves upon a recent algorithm by Avigdor-Elgrabli and
Rabani (SODA 2010) that achieves a competitive ratio of $O(\log
k/\log\log k)$.
Finally, we show a lower bound of $\Omega(\log\log k)$ on the
competitive ratio of any randomized online algorithm. In particular, our
paper gives the first proof that no online algorithm (either
deterministic or randomized) for the reordering buffer management
problem can achieve a constant competitive ratio.

Black-Box Identity Testing of Depth-4 Multilinear Circuits

Shubhangi Saraf and Ilya Volkovich
Affiliations: CSAIL, MIT and Faculty of Computer Science, Technion
Abstract:
We study identity testing for multilinear
$\Spsp(k)$ circuits, i.e., multilinear depth-$4$ circuits with fan-in
$k$ at the top $+$ gate. We give the first polynomial-time deterministic
identity testing algorithm for such circuits. Our results also hold in
the black-box model.
The running time of our algorithm is $n^{\BigO(k)} \cdot
s^{\BigO(k^3)}$, where $n$ is the number of variables, $s$ is the size
of the circuit and $k$ is the fanin of the top gate. The importance of
this model arises from \cite{AgrawalVinay08}, where it was shown that
derandomizing blackbox polynomial identity testing for general depth 4
circuits implies a derandomization of polynomial identity testing (PIT)
for general arithmetic circuits. Prior to our work, the best PIT
algorithm for multilinear $\Spsp(k)$ circuits~\cite{KMSV10} ran in
quasipolynomialtime, with the running time being $n^{\BigO(k^6 \log(k)
\log^2 s )}$.
We obtain our results by showing a strong {\em structural result} for
multilinear $\Spsp(k)$ circuits that compute the zero polynomial. We
show that under some mild conditions, any gate of such a circuit must
compute a {\em sparse} polynomial. We then show how to combine the
structure theorem with a result by Klivans and
Spielman~\cite{KlivansSpielman01} on the identity testing for sparse
polynomials to yield the full result.
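The sparse-polynomial ingredient above can be sketched concretely. The following is an illustrative, simplified take on a Klivans-Spielman style substitution $x_i \mapsto y^{k^i \bmod p}$, which maps a sparse multivariate polynomial to a univariate one; for suitable parameters, distinct monomials map to distinct degrees, so a nonzero sparse polynomial stays nonzero. The function `ks_substitution` and its interface are hypothetical, and the actual parameters and guarantees in \cite{KlivansSpielman01} differ.

```python
from collections import defaultdict

def ks_substitution(terms, k, p):
    """Substitute x_i -> y^(k^i mod p) into a sparse polynomial.
    terms: list of (coefficient, exponent-tuple) pairs.
    Returns the resulting univariate polynomial as {degree: coefficient},
    dropping terms that cancel."""
    uni = defaultdict(int)
    for coef, exps in terms:
        # Degree of this monomial's image under the substitution.
        deg = sum(e * pow(k, i, p) for i, e in enumerate(exps))
        uni[deg] += coef
    return {d: c for d, c in uni.items() if c != 0}
```

For example, the identically zero polynomial $x_0 x_1 - x_0 x_1$ maps to the empty polynomial, while $x_0 + x_1$ keeps two distinct terms.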

Moser and Tardos meet Lov\'asz

Kashyap Kolipaka and Mario Szegedy
Affiliations: Rutgers University
Abstract:
Beck's early work [Random Struct. Algorithms '91]
gave an efficient version of the Lov\'asz Local Lemma (LLL) with a
significant compromise in the parameters. Following several
improvements, Moser [STOC'09], and Moser and Tardos [J. ACM'10] obtained
asymptotically optimal results in terms of the maximal degree.
For a fixed dependency graph $G$, the exact
criterion under which the LLL applies is given by Shearer
[Combinatorica'85]. For a dependency graph $G$, let $\Shearer(G)$ be
the set of those probability assignments to the nodes of $G$ for which the
Lov\'asz Local Lemma holds. We show that:
\begin{enumerate}
\item The Moser-Tardos algorithm is
efficient up to Shearer's bound, by giving a tighter analysis. We also
prove that, whenever ${p\in \Shearer(G)/(1+\epsilon)}$, the algorithm
runs in expected time at most $n/\epsilon$, where $n$ is the number of
nodes in $G$.
\item By adding a few lines to our analysis we
can reprove Shearer's result (for the {\em general} LLL). Our
alternative proof of Shearer's bound not only highlights the
connection between the variable and general versions of the LLL, but also
illustrates that variants of the Moser-Tardos algorithm can be useful in
existence proofs.
\item We obtain new formulas for phase
transitions in the hard-core lattice gas model, nontrivially equivalent
to the ones studied by Scott and Sokal [Combinatorics, Probability and
Computing '06].
\item We prove that if ${p\in
\Shearer(G)/(1+\epsilon)}$, the running time of the Moser-Tardos
algorithm is polynomial not only in the number of events, but also in
the number of variables. This extends one of the results from the more
recent work of Haeupler, Saha and Srinivasan [FOCS'10].
\item Our new formulas immediately give a majorizing lemma that connects LLL bounds on different graphs.
\item We show that the LLL bound for the
(special case of the) variable version is sometimes larger than for the
general version. This is the first known separation between the variable
and the general versions of LLL.
\end{enumerate}
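The Moser-Tardos algorithm analyzed above (in its variable version) admits a very short sketch: sample all variables independently, and while some bad event holds, resample just the variables that event depends on. The interface below is a hypothetical illustration, not the paper's formulation.

```python
import random

def moser_tardos(variables, events, sample, rng):
    """Variable-version Moser-Tardos resampling.
    events: list of (predicate, variable-list) pairs; a predicate takes
    the current assignment and returns True if the bad event occurs.
    sample: dict mapping each variable to a fresh-sample function."""
    assignment = {v: sample[v](rng) for v in variables}
    resamplings = 0
    while True:
        # Find any bad event that currently occurs.
        bad = next((vs for pred, vs in events if pred(assignment)), None)
        if bad is None:
            return assignment, resamplings
        for v in bad:                    # resample only its variables
            assignment[v] = sample[v](rng)
        resamplings += 1
```

As a toy instance, take $n$ random bits with one bad event per window of 5 consecutive bits, occurring when the window is monochromatic (probability $1/16$ per event); the loop quickly converges to an assignment avoiding all bad events.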

On the Complexity of Powering in Finite Fields

Swastik Kopparty
Affiliations: Institute for Advanced Study
Abstract:
We study the complexity of powering in GF(2^n) by
constant-depth arithmetic circuits over GF(2) (also known as
AC^0(parity)). Our study encompasses basic arithmetic operations
such as computing cube roots and cubic residuosity of elements of
GF(2^n). Our main result is that these operations require
exponential-size circuits.
The proof of the main result revisits the
classical Razborov-Smolensky method, and executes an analogue of it in
the land of univariate polynomials over GF(2^n). We exploit the dual
view of GF(2^n) as GF(2)^n, and take advantage of tools from both these
views. In recent years, this interplay between GF(2^n) and GF(2)^n has
played an important role in pseudorandomness, property testing and
coding theory.
We also derive strong average-case versions of
these results. For example, we show that no subexponential-size,
constant-depth arithmetic circuit over GF(2) can correctly compute the
cubic residue symbol for more than a 1/3 + o(1) fraction of the elements
of GF(2^n). As a corollary, we deduce a character sum bound showing that
the cubic residue character over GF(2^n) is uncorrelated with all
degree-d n-variate GF(2) polynomials (viewed as functions over GF(2^n)
in a natural way), provided d << n^{0.1}.
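As a small concrete companion to this abstract (not from the paper), the sketch below implements arithmetic in the toy field GF(2^4), modulo the irreducible polynomial x^4 + x + 1, and checks that when 3 divides 2^n - 1 (here 2^4 - 1 = 15), the cube map is 3-to-1 on nonzero elements, so only a 1/3 fraction of them are cubic residues.

```python
def gf_mul(a, b, n=4, mod=0b10011):
    """Carry-less multiplication in GF(2^n), reducing modulo an
    irreducible polynomial (here x^4 + x + 1, i.e. GF(2^4))."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a & (1 << n):    # reduce: subtract the modulus polynomial
            a ^= mod
    return r

def gf_cube(a):
    """The cube map a -> a^3 in GF(2^4)."""
    return gf_mul(gf_mul(a, a), a)
```

The nonzero elements of GF(2^4) form a cyclic group of order 15, so the cubes form a subgroup of size 5: the image of the cube map on the 15 nonzero elements has exactly 5 elements.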