Excellent! Next you can
create a new website with this list, or
embed it in an existing web page by copying & pasting
any of the following snippets.
**JavaScript** (easiest)

```
<script src="https://bibbase.org/show?bib=NoahSD123.github.io%2Fnsd.bib&jsonp=1"></script>
```

**PHP**

```
<?php
$contents = file_get_contents("https://bibbase.org/show?bib=NoahSD123.github.io%2Fnsd.bib");
print_r($contents);
?>
```

**iFrame** (not recommended)

```
<iframe src="https://bibbase.org/show?bib=NoahSD123.github.io%2Fnsd.bib"></iframe>
```
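
All three snippets serve the same raw `.bib` file, which you can also fetch and process yourself. A minimal sketch of pulling entry types and citation keys out of BibTeX text (the regex and the sample string are illustrative assumptions, not part of BibBase's API):

```python
import re

def bib_entry_keys(bib_text):
    """Return (entry type, citation key) pairs found in a BibTeX string."""
    return re.findall(r"@(\w+)\s*\{\s*([^,\s]+)\s*,", bib_text)

# A toy two-entry string standing in for the downloaded .bib file.
sample = "@inproceedings{ASGapETH18, year = {2018}}\n@phdthesis{NSDthesis, year = {2017}}"
print(bib_entry_keys(sample))
# → [('inproceedings', 'ASGapETH18'), ('phdthesis', 'NSDthesis')]
```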

For more details, see the documentation.


**2018** (4)

(Gap/S)ETH Hardness of SVP.
Aggarwal, D.; and Stephens-Davidowitz, N.
In *STOC*, 2018.

Paper link bibtex abstract 83 downloads

```
@inproceedings{ASGapETH18,
  title = {({{Gap}}/{{S}}){{ETH Hardness}} of {{SVP}}},
  abstract = {$ \newcommand{\problem}[1]{\ensuremath{\mathrm{#1}} } \newcommand{\SVP}{\problem{SVP}} \newcommand{\ensuremath}[1]{#1} $We prove the following quantitative hardness results for the Shortest Vector Problem in the $\ell_p$ norm ($\SVP_p$), where $n$ is the rank of the input lattice. $\bullet$ For "almost all" $p > p_0 \approx 2.1397$, there is no $2^{n/C_p}$-time algorithm for $\SVP_p$ for some explicit constant $C_p > 0$ unless the (randomized) Strong Exponential Time Hypothesis (SETH) is false. $\bullet$ For any $p > 2$, there is no $2^{o(n)}$-time algorithm for $\SVP_p$ unless the (randomized) Gap-Exponential Time Hypothesis (Gap-ETH) is false. Furthermore, for each $p > 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\SVP_p$. $\bullet$ There is no $2^{o(n)}$-time algorithm for $\SVP_p$ for any $1 \leq p \leq 2$ unless either (1) (non-uniform) Gap-ETH is false; or (2) there is no family of lattices with exponential kissing number in the $\ell_2$ norm. Furthermore, for each $1 \leq p \leq 2$, there exists a constant $\gamma_p > 1$ such that the same result holds even for $\gamma_p$-approximate $\SVP_p$.},
  url = {http://arxiv.org/abs/1712.00942},
  booktitle = {{{STOC}}},
  author = {Aggarwal, Divesh and {Stephens-Davidowitz}, Noah},
  year = {2018}
}
```

Just Take the Average! An Embarrassingly Simple $2^n$-Time Algorithm for SVP (and CVP).
Aggarwal, D.; and Stephens-Davidowitz, N.
In *SOSA*, 2018.

Paper link bibtex abstract 65 downloads

```
@inproceedings{ASJustTake18,
  title = {Just Take the Average! {{An}} Embarrassingly Simple $2^n$-Time Algorithm for {{SVP}} (and {{CVP}})},
  abstract = {We show a $2^{n+o(n)}$-time (and space) algorithm for the Shortest Vector Problem on lattices (SVP) that works by repeatedly running an embarrassingly simple "pair and average" sieving-like procedure on a list of lattice vectors. This matches the running time (and space) of the current fastest known algorithm, due to Aggarwal, Dadush, Regev, and Stephens-Davidowitz (ADRS, in STOC, 2015), with a far simpler algorithm. Our algorithm is in fact a modification of the ADRS algorithm, with a certain careful rejection sampling step removed. The correctness of our algorithm follows from a more general "meta-theorem," showing that such rejection sampling steps are unnecessary for a certain class of algorithms and use cases. In particular, this also applies to the related $2^{n + o(n)}$-time algorithm for the Closest Vector Problem (CVP), due to Aggarwal, Dadush, and Stephens-Davidowitz (ADS, in FOCS, 2015), yielding a similar embarrassingly simple algorithm for $\gamma$-approximate CVP for any $\gamma = 1+2^{-o(n/\log n)}$. (We can also remove the rejection sampling procedure from the $2^{n+o(n)}$-time ADS algorithm for exact CVP, but the resulting algorithm is still quite complicated.)},
  url = {http://arxiv.org/abs/1709.01535},
  booktitle = {{{SOSA}}},
  author = {Aggarwal, Divesh and {Stephens-Davidowitz}, Noah},
  year = {2018}
}
```
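
The "pair and average" step at the heart of this algorithm is simple enough to sketch: two lattice vectors in the same coset modulo $2\mathcal{L}$ average to another lattice vector. A toy illustration over $\mathbb{Z}^2$ (this shows only the core step; the actual algorithm applies it repeatedly to exponentially many discrete Gaussian samples):

```python
from collections import defaultdict

def pair_and_average(vectors):
    """One 'pair and average' pass over a list of vectors in Z^2.

    Vectors that agree coordinate-wise mod 2 are paired, and each pair
    (u, v) is replaced by (u + v) / 2 -- again an integer vector, and
    typically a shorter one.
    """
    buckets = defaultdict(list)
    for v in vectors:
        buckets[(v[0] % 2, v[1] % 2)].append(v)
    averaged = []
    for group in buckets.values():
        for u, v in zip(group[::2], group[1::2]):
            averaged.append(((u[0] + v[0]) // 2, (u[1] + v[1]) // 2))
    return averaged

print(pair_and_average([(3, 5), (1, 1), (4, 2), (-2, 0), (7, 3)]))
# → [(2, 3), (1, 1)]
```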

Generalizations of Banaszczyk's Transference Theorems and Tail Bound.
Miller, S. D.; and Stephens-Davidowitz, N.
2018.

Paper link bibtex abstract 1 download

```
@unpublished{MSGeneralizationsBanaszczyk18,
  title = {Generalizations of {{Banaszczyk}}'s Transference Theorems and Tail Bound},
  abstract = {We generalize Banaszczyk's seminal tail bound for the Gaussian mass of a lattice to a wide class of test functions. We therefore obtain quite general transference bounds, as well as bounds on the number of lattice points contained in certain bodies. As example applications, we bound the lattice kissing number in $\ell_p$ norms by $e^{(n+ o(n))/p}$ for $0 < p \leq 2$, and also give a proof of a new transference bound in the $\ell_1$ norm.},
  url = {http://arxiv.org/abs/1802.05708},
  author = {Miller, Stephen D. and {Stephens-Davidowitz}, Noah},
  year = {2018}
}
```

New (and Old) Proof Systems for Lattice Problems.
Alamati, N.; Peikert, C.; and Stephens-Davidowitz, N.
In *PKC*, 2018.

Paper link bibtex abstract 19 downloads

```
@inproceedings{APSNewOld18,
  title = {New (and Old) Proof Systems for Lattice Problems},
  abstract = {$ \newcommand{\Otil}{\ensuremath{\widetilde{O}}} \newcommand{\problem}[1]{\ensuremath{\mathsf{#1}}} \newcommand{\gapspp}{\problem{GapSPP}} \newcommand{\oNP}{\mathsf{coNP}} \newcommand{\eps}{\epsilon} \newcommand{\ensuremath}[1]{#1} $We continue the study of statistical zero-knowledge (SZK) proofs, both interactive and noninteractive, for computational problems on point lattices. We are particularly interested in the problem $\gapspp$ of approximating the $\epsilon$-smoothing parameter (for some $\epsilon < 1/2$) of an $n$-dimensional lattice. The smoothing parameter is a key quantity in the study of lattices, and $\gapspp$ has been emerging as a core problem in lattice-based cryptography, e.g., in worst-case to average-case reductions. We show that $\gapspp$ admits SZK proofs for remarkably low approximation factors, improving on prior work by up to roughly $\sqrt{n}$. Specifically: $\bullet$ There is a noninteractive SZK proof for $O(\log(n) \sqrt{\log (1/\eps)})$-approximate $\gapspp$. Moreover, for any negligible $\epsilon$ and a larger approximation factor $\Otil(\sqrt{n \log(1/\eps)})$, there is such a proof with an efficient prover. $\bullet$ There is an (interactive) SZK proof with an efficient prover for $O(\log n + \sqrt{\log(1/\eps)/\log n})$-approximate $\problem{coGapSPP}$. We show this by proving that $O(\log n)$-approximate $\gapspp$ is in $\oNP$. In addition, we give an (interactive) SZK proof with an efficient prover for approximating the lattice covering radius to within an $O(\sqrt{n})$ factor, improving upon the prior best factor of $\omega(\sqrt{n \log n})$.},
  url = {https://eprint.iacr.org/2017/1226},
  booktitle = {{{PKC}}},
  author = {Alamati, Navid and Peikert, Chris and {Stephens-Davidowitz}, Noah},
  year = {2018}
}
```

**2017** (6)

An Inequality for Gaussians on Lattices.
Regev, O.; and Stephens-Davidowitz, N.
*SIDMA*. 2017.

Paper link bibtex abstract 47 downloads

```
@article{RSInequalityGaussians17,
  title = {An Inequality for {{Gaussians}} on Lattices},
  abstract = {$ \newcommand{\R}{\ensuremath{\mathbb{R}}} \newcommand{\lat}{\mathcal{L}} \newcommand{\ensuremath}[1]{#1} $We show that for any lattice $\lat \subseteq \R^n$ and vectors $\vec{x}, \vec{y} \in \R^n$, \[ \rho(\lat + \vec{x})^2 \rho(\lat + \vec{y})^2 \leq \rho(\lat)^2 \rho(\lat + \vec{x} + \vec{y}) \rho(\lat + \vec{x} - \vec{y}) \; , \] where $\rho$ is the Gaussian measure $\rho(A) := \sum_{\vec{w} \in A} \exp(-\pi \| \vec{w} \|^2)$. We show a number of applications, including bounds on the moments of the discrete Gaussian distribution, various monotonicity properties of the heat kernel on flat tori, and a positive correlation inequality for Gaussian measures on lattices.},
  url = {http://arxiv.org/abs/1502.04796},
  journal = {SIDMA},
  author = {Regev, Oded and {Stephens-Davidowitz}, Noah},
  year = {2017}
}
```
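
The inequality is concrete enough to check numerically on a small example. A quick sanity check over the one-dimensional lattice $\mathbb{Z}$ using truncated sums (the shifts $x = 0.3$, $y = 0.4$ are arbitrary choices for illustration):

```python
import math

def rho(shift, terms=25):
    """Truncated Gaussian mass rho(Z + shift) = sum_k exp(-pi * (k + shift)^2)."""
    return sum(math.exp(-math.pi * (k + shift) ** 2) for k in range(-terms, terms + 1))

x, y = 0.3, 0.4
lhs = rho(x) ** 2 * rho(y) ** 2
rhs = rho(0) ** 2 * rho(x + y) * rho(x - y)
print(lhs <= rhs)  # the paper's inequality, specialized to the lattice Z
# → True
```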

On the Quantitative Hardness of CVP.
Bennett, H.; Golovnev, A.; and Stephens-Davidowitz, N.
In *FOCS*, 2017.

Paper princeton talk link bibtex abstract 68 downloads

```
@inproceedings{BGSQuantitativeHardness17,
  title = {On the Quantitative Hardness of {{CVP}}},
  abstract = {$ \newcommand{\eps}{\epsilon} \newcommand{\problem}[1]{\ensuremath{\mathrm{#1}} } \newcommand{\CVP}{\problem{CVP}} \newcommand{\SVP}{\problem{SVP}} \newcommand{\CVPP}{\problem{CVPP}} \newcommand{\ensuremath}[1]{#1} $For odd integers $p \geq 1$ (and $p = \infty$), we show that the Closest Vector Problem in the $\ell_p$ norm ($\CVP_p$) over rank $n$ lattices cannot be solved in $2^{(1-\eps) n}$ time for any constant $\eps > 0$ unless the Strong Exponential Time Hypothesis (SETH) fails. We then extend this result to "almost all" values of $p \geq 1$, not including the even integers. This comes tantalizingly close to settling the quantitative time complexity of the important special case of $\CVP_2$ (i.e., $\CVP$ in the Euclidean norm), for which a $2^{n +o(n)}$-time algorithm is known. In particular, our result applies for any $p = p(n) \neq 2$ that approaches $2$ as $n \to \infty$. We also show a similar SETH-hardness result for $\SVP_\infty$; hardness of approximating $\CVP_p$ to within some constant factor under the so-called Gap-ETH assumption; and other quantitative hardness results for $\CVP_p$ and $\CVPP_p$ for any $1 \leq p < \infty$ under different assumptions......},
  url = {http://arxiv.org/abs/1704.03928},
  url_Princeton_Talk = {http://www.youtube.com/watch?v=sd-SMjAl0ks},
  booktitle = {{{FOCS}}},
  author = {Bennett, Huck and Golovnev, Alexander and {Stephens-Davidowitz}, Noah},
  year = {2017}
}
```

Pseudorandomness of Ring-LWE for Any Ring and Modulus.
Peikert, C.; Regev, O.; and Stephens-Davidowitz, N.
In *STOC*, 2017.

Paper link bibtex abstract 68 downloads

```
@inproceedings{PRSPseudorandomnessRingLWE17,
  title = {Pseudorandomness of {{Ring}}-{{LWE}} for Any Ring and Modulus},
  abstract = {We give a polynomial-time quantum reduction from worst-case (ideal) lattice problems directly to the decision version of (Ring-)LWE. This extends to decision all the worst-case hardness results that were previously known for the search version, for the same or even better parameters and with no algebraic restrictions on the modulus or number field. Indeed, our reduction is the first that works for decision Ring-LWE with any number field and any modulus.},
  url = {https://eprint.iacr.org/2017/258},
  booktitle = {{{STOC}}},
  author = {Peikert, Chris and Regev, Oded and {Stephens-Davidowitz}, Noah},
  year = {2017}
}
```

A Reverse Minkowski Theorem.
Regev, O.; and Stephens-Davidowitz, N.
In *STOC*, 2017.

Paper tcs+ talk ias talk link bibtex abstract 91 downloads

```
@inproceedings{RSReverseMinkowski17,
  title = {A Reverse {{Minkowski}} Theorem},
  abstract = {$ \newcommand{\R}{\mathbb{R}} \newcommand{\lat}{\mathcal{L}} $We prove a conjecture due to Dadush, showing that if $\lat \subset \R^n$ is a lattice such that $\det(\lat') \ge 1$ for all sublattices $\lat' \subseteq \lat$, then \[ \sum_{y \in \lat} e^{-t^2 \| y\|^2} \le 3/2 \; , \] where $t := 10(\log n + 2)$. From this we also derive bounds on the number of short lattice vectors and on the covering radius.},
  url = {http://arxiv.org/abs/1611.05979},
  url_TCS+_talk = {http://www.youtube.com/watch?v=mgDNeg3U5TQ},
  url_IAS_talk = {http://www.youtube.com/watch?v=9mvPxAKj5BU},
  booktitle = {{{STOC}}},
  author = {Regev, Oded and {Stephens-Davidowitz}, Noah},
  year = {2017}
}
```
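
For intuition, the bound can be sanity-checked on $\mathbb{Z}^n$, where every sublattice has determinant at least 1 and the Gaussian sum factors across coordinates. A quick numerical check (illustrative; we read $\log$ as the natural logarithm here, and the check passes comfortably either way):

```python
import math

def gaussian_sum_Zn(n, terms=50):
    """sum_{y in Z^n} exp(-t^2 * ||y||^2) with t = 10 * (log n + 2).

    Over Z^n the sum factors into a one-dimensional sum raised to the n-th power.
    """
    t = 10 * (math.log(n) + 2)
    one_dim = sum(math.exp(-t * t * k * k) for k in range(-terms, terms + 1))
    return one_dim ** n

print(all(gaussian_sum_Zn(n) <= 1.5 for n in (1, 2, 8, 64)))
# → True
```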

On the Gaussian Measure over Lattices.
Stephens-Davidowitz, N.
Ph.D. Thesis, New York University, 2017.

link bibtex

```
@phdthesis{NSDthesis,
  type = {{{PhD Thesis}}},
  title = {On the {{Gaussian}} Measure over Lattices},
  school = {New York University},
  author = {{Stephens-Davidowitz}, Noah},
  year = {2017}
}
```

Implementing BP-Obfuscation Using Graph-Induced Encoding.
Halevi, S.; Halevi, T.; Shoup, V.; and Stephens-Davidowitz, N.
In 2017.

Paper link bibtex abstract 8 downloads

```
@inproceedings{HHSSImplementingBPObfuscation17,
  title = {Implementing {{BP}}-{{Obfuscation}} Using {{Graph}}-{{Induced Encoding}}},
  abstract = {We implemented (a simplified version of) the branching-program obfuscator due to Gentry et al. (GGH15), which is itself a variation of the first obfuscation candidate by Garg et al. (GGHRSW13). To keep within the realm of feasibility, we had to give up on some aspects of the construction, specifically the ``multiplicative bundling'' factors that protect against mixed-input attacks. Hence our implementation can only support read-once branching programs. To be able to handle anything more than just toy problems, we developed a host of algorithmic and code-level optimizations. These include new variants of discrete Gaussian sampler and lattice trapdoor sampler, efficient matrix-manipulation routines, and many tradeoffs. We expect that these optimizations will find other uses in lattice-based cryptography beyond just obfuscation. Our implementation is the first obfuscation attempt using the GGH15 graded encoding scheme, offering performance advantages over other graded encoding methods when obfuscating finite-state machines with many states. In our most demanding setting, we were able to obfuscate programs with input length of 20 nibbles (80 bits) and over 100 states, which seems out of reach for prior implementations. Although further optimizations are surely possible, we do not expect any implementation of current schemes to be able to handle much larger parameters.},
  url = {https://eprint.iacr.org/2017/104},
  author = {Halevi, Shai and Halevi, Tzipora and Shoup, Victor and {Stephens-Davidowitz}, Noah},
  year = {2017}
}
```

**2016** (4)

Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater than One.
Stephens-Davidowitz, N.
In *APPROX*, 2016.

Paper link bibtex abstract 44 downloads

```
@inproceedings{SteSearchtodecisionReductions16,
  title = {Search-to-Decision Reductions for Lattice Problems with Approximation Factors (Slightly) Greater than One},
  abstract = {We show the first dimension-preserving search-to-decision reductions for approximate SVP and CVP. In particular, for any $\gamma \leq 1 + O(\log n/n)$, we obtain an efficient dimension-preserving reduction from $\gamma^{O(n/\log n)}$-SVP to $\gamma$-GapSVP and an efficient dimension-preserving reduction from $\gamma^{O(n)}$-CVP to $\gamma$-GapCVP. These results generalize the known equivalences of the search and decision versions of these problems in the exact case when $\gamma = 1$. For SVP, we actually obtain something slightly stronger than a search-to-decision reduction---we reduce $\gamma^{O(n/\log n)}$-SVP to $\gamma$-unique SVP, a potentially easier problem than $\gamma$-GapSVP.},
  url = {http://arxiv.org/abs/1512.04138},
  booktitle = {{{APPROX}}},
  author = {{Stephens-Davidowitz}, Noah},
  year = {2016}
}
```

On the Lattice Distortion Problem.
Bennett, H.; Dadush, D.; and Stephens-Davidowitz, N.
In *ESA*, 2016.

Paper link bibtex abstract 14 downloads

```
@inproceedings{BDSLatticeDistortion16,
  title = {On the {{Lattice Distortion Problem}}},
  abstract = {We introduce and study the \emph{Lattice Distortion Problem} (LDP). LDP asks how "similar" two lattices are. I.e., what is the minimal distortion of a linear bijection between the two lattices? LDP generalizes the Lattice Isomorphism Problem (the lattice analogue of Graph Isomorphism), which simply asks whether the minimal distortion is one. As our first contribution, we show that the distortion between any two lattices is approximated up to a $n^{O(\log n)}$ factor by a simple function of their successive minima. Our methods are constructive, allowing us to compute low-distortion mappings that are within a $2^{O(n \log \log n/\log n)}$ factor of optimal in polynomial time and within a $n^{O(\log n)}$ factor of optimal in singly exponential time. Our algorithms rely on a notion of basis reduction introduced by Seysen (Combinatorica 1993), which we show is intimately related to lattice distortion. Lastly, we show that LDP is NP-hard to approximate to within any constant factor (under randomized reductions), by a reduction from the Shortest Vector Problem.},
  url = {http://arxiv.org/abs/1605.03613},
  booktitle = {{{ESA}}},
  author = {Bennett, Huck and Dadush, Daniel and {Stephens-Davidowitz}, Noah},
  year = {2016}
}
```

Discrete Gaussian Sampling Reduces to CVP and SVP.
Stephens-Davidowitz, N.
In *SODA*, 2016.

Paper link bibtex abstract 108 downloads

```
@inproceedings{SteDiscreteGaussian16,
  title = {Discrete {{Gaussian Sampling Reduces}} to {{CVP}} and {{SVP}}},
  abstract = {The discrete Gaussian $D_{L- t, s}$ is the distribution that assigns to each vector $x$ in a shifted lattice $L - t$ probability proportional to $e^{-\pi \|x\|^2/s^2}$. It has long been an important tool in the study of lattices. More recently, algorithms for discrete Gaussian sampling (DGS) have found many applications in computer science. In particular, polynomial-time algorithms for DGS with very high parameters $s$ have found many uses in cryptography and in reductions between lattice problems. And, in the past year, Aggarwal, Dadush, Regev, and Stephens-Davidowitz showed $2^{n+o(n)}$-time algorithms for DGS with a much wider range of parameters and used them to obtain the current fastest known algorithms for the two most important lattice problems, the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP). Motivated by its increasing importance, we investigate the complexity of DGS itself and its relationship to CVP and SVP. Our first result is a polynomial-time dimension-preserving reduction from DGS to CVP. There is a simple reduction from CVP to DGS, so this shows that DGS is equivalent to CVP. Our second result, which we find to be more surprising, is a polynomial-time dimension-preserving reduction from centered DGS (the important special case when $ t = 0$) to SVP. In the other direction, there is a simple reduction from $\gamma$-approximate SVP for any $\gamma = \Omega(\sqrt{n/\log n})$, and we present some (relatively weak) evidence to suggest that this might be the best achievable approximation factor. We also show that our CVP result extends to a much wider class of distributions and even to other norms.},
  url = {http://arxiv.org/abs/1506.07490},
  booktitle = {{{SODA}}},
  author = {{Stephens-Davidowitz}, Noah},
  year = {2016}
}
```
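
The distribution itself is easy to write down directly. A toy sampler for the one-dimensional discrete Gaussian $D_{\mathbb{Z} - t, s}$ that simply truncates the support (illustrative only; the algorithms discussed here avoid enumerating the support and work in high dimension):

```python
import math
import random

def discrete_gaussian(t, s, terms=30):
    """Support and probabilities of D_{Z - t, s}, truncated to 2*terms + 1 points.

    Each x in Z - t gets mass proportional to exp(-pi * x^2 / s^2).
    """
    support = [k - t for k in range(-terms, terms + 1)]
    weights = [math.exp(-math.pi * x * x / (s * s)) for x in support]
    total = sum(weights)
    return support, [w / total for w in weights]

support, probs = discrete_gaussian(t=0.5, s=2.0)
sample = random.choices(support, weights=probs, k=1)[0]
print(abs(sum(probs) - 1.0) < 1e-12)
# → True
```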

Message Transmission with Reverse Firewalls—Secure Communication on Corrupted Machines.
Dodis, Y.; Mironov, I.; and Stephens-Davidowitz, N.
In *CRYPTO*, 2016.

Paper crypto talk link bibtex abstract 11 downloads

```
@inproceedings{DMSMessageTransmission16,
  title = {Message Transmission with {{Reverse Firewalls}}---Secure Communication on Corrupted Machines},
  abstract = {Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have developed a whole suite of tools to accomplish this task, with a wide variety of notions of security, setup assumptions, and running times. However, almost all prior work on this topic made a seemingly innocent assumption: that Alice has access to a trusted computer with a proper implementation of the protocol. The Snowden revelations show us that, in fact, powerful adversaries can and will corrupt users' machines in order to compromise their security. And, (presumably) accidental vulnerabilities are regularly found in popular cryptographic software, showing that users cannot even trust implementations that were created honestly. This leads to the following (seemingly absurd) question: ``Can Alice securely send a message to Bob even if she cannot trust her own computer?!'' Bellare, Paterson, and Rogaway recently studied this question. They show a strong lower bound that in particular rules out even semantically secure public-key encryption in their model. However, Mironov and Stephens-Davidowitz recently introduced a new framework for solving such problems: reverse firewalls. A secure reverse firewall is a third party that ``sits between Alice and the outside world'' and modifies her sent and received messages so that even if her machine has been corrupted, Alice's security is still guaranteed. We show how to use reverse firewalls to sidestep the impossibility result of Bellare et al., and we achieve strong security guarantees in this extreme setting. Indeed, we find a rich structure of solutions that vary in efficiency, security, and setup assumptions, in close analogy with message transmission in the classical setting.
Our strongest and most important result shows a protocol that achieves interactive, concurrent CCA-secure message transmission with a reverse firewall---i.e., CCA-secure message transmission on a possibly compromised machine! Surprisingly, this protocol is quite efficient and simple, requiring only four rounds and a small constant number of public-key operations for each party. It could easily be used in practice. Behind this result is a technical composition theorem that shows how key agreement with a sufficiently secure reverse firewall can be used to construct a message-transmission protocol with its own secure reverse firewall.},
  url = {https://eprint.iacr.org/2015/548},
  url_crypto_talk = {http://www.youtube.com/watch?v=2DOc-9u1EbQ},
  booktitle = {{{CRYPTO}}},
  author = {Dodis, Yevgeniy and Mironov, Ilya and {Stephens-Davidowitz}, Noah},
  year = {2016}
}
```

Suppose Alice wishes to send a message to Bob privately over an untrusted channel. Cryptographers have developed a whole suite of tools to accomplish this task, with a wide variety of notions of security, setup assumptions, and running times. However, almost all prior work on this topic made a seemingly innocent assumption: that Alice has access to a trusted computer with a proper implementation of the protocol. The Snowden revelations show us that, in fact, powerful adversaries can and will corrupt users' machines in order to compromise their security. And, (presumably) accidental vulnerabilities are regularly found in popular cryptographic software, showing that users cannot even trust implementations that were created honestly. This leads to the following (seemingly absurd) question: ``Can Alice securely send a message to Bob even if she cannot trust her own computer?!'' Bellare, Paterson, and Rogaway recently studied this question. They show a strong lower bound that in particular rules out even semantically secure public-key encryption in their model. However, Mironov and Stephens-Davidowitz recently introduced a new framework for solving such problems: reverse firewalls. A secure reverse firewall is a third party that ``sits between Alice and the outside world'' and modifies her sent and received messages so that even if her machine has been corrupted, Alice's security is still guaranteed. We show how to use reverse firewalls to sidestep the impossibility result of Bellare et al., and we achieve strong security guarantees in this extreme setting. Indeed, we find a rich structure of solutions that vary in efficiency, security, and setup assumptions, in close analogy with message transmission in the classical setting. Our strongest and most important result shows a protocol that achieves interactive, concurrent CCA-secure message transmission with a reverse firewall—i.e., CCA-secure message transmission on a possibly compromised machine!
Surprisingly, this protocol is quite efficient and simple, requiring only four rounds and a small constant number of public-key operations for each party. It could easily be used in practice. Behind this result is a technical composition theorem that shows how key agreement with a sufficiently secure reverse firewall can be used to construct a message-transmission protocol with its own secure reverse firewall.

2015
(4)

Solving the Closest Vector Problem in $2^n$ Time—The Discrete Gaussian Strikes Again!.
Aggarwal, D.; Dadush, D.; and Stephens-Davidowitz, N.
In *FOCS*, 2015.

Paper link bibtex abstract 18 downloads

@inproceedings{ADSSolvingClosest15, title = {Solving the {{Closest Vector Problem}} in $2^n$ Time\textemdash{}{{The}} Discrete {{Gaussian}} Strikes Again!}, abstract = {We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L- t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1+2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.}, url = {http://arxiv.org/abs/1504.01995}, booktitle = {{{FOCS}}}, author = {Aggarwal, Divesh and Dadush, Daniel and {Stephens-Davidowitz}, Noah}, year = {2015} }

We give a $2^{n+o(n)}$-time and space randomized algorithm for solving the exact Closest Vector Problem (CVP) on $n$-dimensional Euclidean lattices. This improves on the previous fastest algorithm, the deterministic $\widetilde{O}(4^{n})$-time and $\widetilde{O}(2^{n})$-space algorithm of Micciancio and Voulgaris. We achieve our main result in three steps. First, we show how to modify the sampling algorithm from [ADRS15] to solve the problem of discrete Gaussian sampling over lattice shifts, $L - t$, with very low parameters. While the actual algorithm is a natural generalization of [ADRS15], the analysis uses substantial new ideas. This yields a $2^{n+o(n)}$-time algorithm for approximate CVP for any approximation factor $\gamma = 1+2^{-o(n/\log n)}$. Second, we show that the approximate closest vectors to a target vector $t$ can be grouped into "lower-dimensional clusters," and we use this to obtain a recursive reduction from exact CVP to a variant of approximate CVP that "behaves well with these clusters." Third, we show that our discrete Gaussian sampling algorithm can be used to solve this variant of approximate CVP. The analysis depends crucially on some new properties of the discrete Gaussian distribution and approximate closest vectors, which might be of independent interest.

Solving the Shortest Vector Problem in $2^n$ Time via Discrete Gaussian Sampling.
Aggarwal, D.; Dadush, D.; Regev, O.; and Stephens-Davidowitz, N.
In *STOC*, 2015.

Paper simons talk link bibtex abstract 48 downloads

@inproceedings{ADRSSolvingShortest15, title = {Solving the {{Shortest Vector Problem}} in $2^n$ Time via {{Discrete Gaussian Sampling}}}, abstract = {We give a randomized $2^{n+o(n)}$-time and space algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic $\widetilde{O}(4^n)$-time and $\widetilde{O}(2^n)$-space algorithm of Micciancio and Voulgaris (STOC 2010, SIAM J. Comp. 2013). In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS). More specifically, we show how to sample $2^{n/2}$ vectors from the discrete Gaussian distribution at any parameter in $2^{n+o(n)}$ time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. We also show that our DGS algorithm implies a $2^{n + o(n)}$-time algorithm that approximates the Closest Vector Problem to within a factor of $1.97$. In addition, we give a more refined algorithm for DGS above the so-called smoothing parameter of the lattice, which can generate $2^{n/2}$ discrete Gaussian samples in just $2^{n/2+o(n)}$ time and space. Among other things, this implies a $2^{n/2+o(n)}$-time and space algorithm for $1.93$-approximate decision SVP.}, url = {http://arxiv.org/abs/1412.7994}, url_Simons_talk = {http://www.youtube.com/watch?v=PWy0ZBRAUxA}, booktitle = {{{STOC}}}, author = {Aggarwal, Divesh and Dadush, Daniel and Regev, Oded and {Stephens-Davidowitz}, Noah}, year = {2015} }

We give a randomized $2^{n+o(n)}$-time and space algorithm for solving the Shortest Vector Problem (SVP) on n-dimensional Euclidean lattices. This improves on the previous fastest algorithm: the deterministic $\widetilde{O}(4^n)$-time and $\widetilde{O}(2^n)$-space algorithm of Micciancio and Voulgaris (STOC 2010, SIAM J. Comp. 2013). In fact, we give a conceptually simple algorithm that solves the (in our opinion, even more interesting) problem of discrete Gaussian sampling (DGS). More specifically, we show how to sample $2^{n/2}$ vectors from the discrete Gaussian distribution at any parameter in $2^{n+o(n)}$ time and space. (Prior work only solved DGS for very large parameters.) Our SVP result then follows from a natural reduction from SVP to DGS. We also show that our DGS algorithm implies a $2^{n + o(n)}$-time algorithm that approximates the Closest Vector Problem to within a factor of $1.97$. In addition, we give a more refined algorithm for DGS above the so-called smoothing parameter of the lattice, which can generate $2^{n/2}$ discrete Gaussian samples in just $2^{n/2+o(n)}$ time and space. Among other things, this implies a $2^{n/2+o(n)}$-time and space algorithm for $1.93$-approximate decision SVP.
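The discrete Gaussian $D_{L,s}$ at the heart of this algorithm assigns each lattice point $x$ mass proportional to $e^{-\pi \|x\|^2/s^2}$. For intuition only, here is a naive rejection sampler for the one-dimensional case $L = \mathbb{Z}$; the paper's contribution is sampling over arbitrary lattices, where rejection from a simple proposal is not efficient, so this sketch illustrates the distribution, not the algorithm:

```python
import math
import random

def sample_discrete_gaussian_Z(s, tail=12):
    """Sample from D_{Z,s}: Pr[x] proportional to exp(-pi * x^2 / s^2).

    Naive rejection sampler over a truncated support; fine for
    illustration, but neither constant-time nor cryptographically safe.
    """
    bound = int(math.ceil(tail * s))       # tails beyond this are negligible
    while True:
        x = random.randint(-bound, bound)  # uniform proposal
        rho = math.exp(-math.pi * x * x / (s * s))
        if random.random() < rho:          # accept with probability rho(x)
            return x

# Empirical sanity check: mean near 0, variance near s^2/(2*pi)
samples = [sample_discrete_gaussian_Z(10.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

For large $s$ the variance approaches $s^2/(2\pi)$, matching the continuous Gaussian with the same density function.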

Dimension-Preserving Reductions between Lattice Problems.
Stephens-Davidowitz, N.
2015.

Paper link bibtex abstract 98 downloads

@unpublished{SteDimensionpreservingReductions15, title = {Dimension-Preserving Reductions between Lattice Problems}, abstract = {Computational problems on lattices have found a remarkable number of applications in computer science. In particular, over the past twenty years, many strong cryptographic primitives have been constructed with their security based on the (worst-case) hardness of various lattice problems. Due to their importance, there has been much work towards understanding the relationship between these problems. For the parameters that typically interest us, the fastest known algorithms for lattice problems run in time that is exponential in the dimension of the lattice. Therefore, we are typically interested in reductions that preserve this dimension. (We actually relax this notion slightly and consider a reduction to be ``dimension-preserving'' if it increases the dimension by at most an additive constant.) We summarize many of the known results in this area.}, url = {http://noahsd.com/latticeproblems.pdf}, author = {{Stephens-Davidowitz}, Noah}, year = {2015} }

Computational problems on lattices have found a remarkable number of applications in computer science. In particular, over the past twenty years, many strong cryptographic primitives have been constructed with their security based on the (worst-case) hardness of various lattice problems. Due to their importance, there has been much work towards understanding the relationship between these problems. For the parameters that typically interest us, the fastest known algorithms for lattice problems run in time that is exponential in the dimension of the lattice. Therefore, we are typically interested in reductions that preserve this dimension. (We actually relax this notion slightly and consider a reduction to be ``dimension-preserving'' if it increases the dimension by at most an additive constant.) We summarize many of the known results in this area.

Cryptographic Reverse Firewalls.
Mironov, I.; and Stephens-Davidowitz, N.
In *Eurocrypt*, 2015.

Paper slides link bibtex abstract 35 downloads

@inproceedings{MSCryptographicReverse15, title = {Cryptographic {{Reverse Firewalls}}}, abstract = {Recent revelations by Edward Snowden show that a user's own hardware and software can be used against her in various ways (e.g., to leak her private information). And, a series of recent announcements has shown that widespread implementations of cryptographic software often contain serious bugs that cripple security. This motivates us to consider the following (seemingly absurd) question: How can we guarantee a user's security when she may be using a malfunctioning or arbitrarily compromised machine? To that end, we introduce the notion of a cryptographic reverse firewall (RF). Such a machine sits between the user's computer and the outside world, potentially modifying the messages that she sends and receives as she engages in a cryptographic protocol. A good reverse firewall accomplishes three things: (1) it maintains functionality, so that if the user's computer is working correctly, the RF will not break the functionality of the underlying protocol; (2) it preserves security, so that regardless of how the user's machine behaves, the presence of the RF will provide the same security guarantees as the properly implemented protocol; and (3) it resists exfiltration, so that regardless of how the user's machine behaves, the presence of the RF will prevent the machine from leaking any information to the outside world. Importantly, we do not model the firewall as a trusted party. It does not share any secrets with the user, and the protocol should be both secure and functional without the firewall (when it is implemented correctly). Our security definition for reverse firewalls depends on the security notion(s) of the underlying protocol. As such, our model generalizes much prior work and provides a general framework for building cryptographic schemes that remain secure when run on compromised machine. 
It is also a modern take on a line of work that received considerable attention in the 80s and 90s. We show that our definition is achievable by constructing a private function evaluation protocol with a secure reverse firewall for each party. Along the way, we design an oblivious transfer protocol that also has a secure RF for each party, and a rerandomizable garbled circuit that is both more efficient and more secure than previous constructions. Finally, we show how to convert $\backslash$emph\{any\} protocol into a protocol with an exfiltration-resistant reverse firewall for all parties. (In other words, we provide a generic way to prevent a tampered machine from leaking information to an eavesdropper via any protocol.)}, url = {https://eprint.iacr.org/2014/758}, url_slides = {http://www.cs.nyu.edu/crg/newSlides/02_23_16.pdf}, booktitle = {Eurocrypt}, author = {Mironov, Ilya and {Stephens-Davidowitz}, Noah}, year = {2015} }

Recent revelations by Edward Snowden show that a user's own hardware and software can be used against her in various ways (e.g., to leak her private information). And, a series of recent announcements has shown that widespread implementations of cryptographic software often contain serious bugs that cripple security. This motivates us to consider the following (seemingly absurd) question: How can we guarantee a user's security when she may be using a malfunctioning or arbitrarily compromised machine? To that end, we introduce the notion of a cryptographic reverse firewall (RF). Such a machine sits between the user's computer and the outside world, potentially modifying the messages that she sends and receives as she engages in a cryptographic protocol. A good reverse firewall accomplishes three things: (1) it maintains functionality, so that if the user's computer is working correctly, the RF will not break the functionality of the underlying protocol; (2) it preserves security, so that regardless of how the user's machine behaves, the presence of the RF will provide the same security guarantees as the properly implemented protocol; and (3) it resists exfiltration, so that regardless of how the user's machine behaves, the presence of the RF will prevent the machine from leaking any information to the outside world. Importantly, we do not model the firewall as a trusted party. It does not share any secrets with the user, and the protocol should be both secure and functional without the firewall (when it is implemented correctly). Our security definition for reverse firewalls depends on the security notion(s) of the underlying protocol. As such, our model generalizes much prior work and provides a general framework for building cryptographic schemes that remain secure when run on a compromised machine. It is also a modern take on a line of work that received considerable attention in the 80s and 90s.
We show that our definition is achievable by constructing a private function evaluation protocol with a secure reverse firewall for each party. Along the way, we design an oblivious transfer protocol that also has a secure RF for each party, and a rerandomizable garbled circuit that is both more efficient and more secure than previous constructions. Finally, we show how to convert *any* protocol into a protocol with an exfiltration-resistant reverse firewall for all parties. (In other words, we provide a generic way to prevent a tampered machine from leaking information to an eavesdropper via any protocol.)
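The rerandomization idea that reverse firewalls rely on can be illustrated with textbook ElGamal, whose ciphertexts anyone can rerandomize without the secret key. This is a toy sketch with insecure toy parameters, not one of the paper's protocols:

```python
import random

# Toy ElGamal over a Schnorr group: p = 2q + 1, with g = 4 generating
# the order-q subgroup of quadratic residues. Tiny parameters for
# illustration only -- NOT secure.
p, q, g = 467, 233, 4

def keygen():
    x = random.randrange(1, q)             # secret key
    return x, pow(g, x, p)                 # (sk, pk = g^x)

def encrypt(pk, m):                        # m must be a subgroup element
    r = random.randrange(1, q)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def rerandomize(pk, ct):
    """What a reverse firewall does to a passing ciphertext: multiply
    in a fresh encryption of 1, erasing any randomness a tampered
    implementation may have chosen maliciously (e.g., to leak bits)."""
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p)) % p, (c2 * pow(pk, s, p)) % p

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - sk, p)) % p   # c2 * c1^{-sk} mod p

sk, pk = keygen()
m = pow(g, 42, p)                          # message encoded as group element
ct = encrypt(pk, m)
ct2 = rerandomize(pk, ct)                  # the firewall's output
assert decrypt(sk, ct2) == m               # functionality is preserved
```

The rerandomized ciphertext decrypts to the same message but is distributed like a fresh encryption, which is exactly the "resists exfiltration" property in miniature.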

2014
(2)

On the Closest Vector Problem with a Distance Guarantee.
Dadush, D.; Regev, O.; and Stephens-Davidowitz, N.
In *CCC*, 2014.

Paper link bibtex abstract 58 downloads

@inproceedings{DRSClosestVector14, title = {On the {{Closest Vector Problem}} with a Distance Guarantee}, abstract = {We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most $\alpha$ times the length of the shortest nonzero lattice vector, requires as preprocessing advice only $N \approx \widetilde{O}(n \exp(\alpha^2 n /(1-2\alpha)^2))$ vectors, and runs in time $\widetilde{O}(nN)$. As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of $O(n/\sqrt{\log n})$ for search CVPP, improving on the previous best of $O(n^{1.5})$ due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only $O(n)$ vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of $O(n)$.}, url = {http://arxiv.org/abs/1409.8063}, booktitle = {{{CCC}}}, author = {Dadush, Daniel and Regev, Oded and {Stephens-Davidowitz}, Noah}, year = {2014} }

We present a substantially more efficient variant, both in terms of running time and size of preprocessing advice, of the algorithm by Liu, Lyubashevsky, and Micciancio for solving CVPP (the preprocessing version of the Closest Vector Problem, CVP) with a distance guarantee. For instance, for any $\alpha < 1/2$, our algorithm finds the (unique) closest lattice point for any target point whose distance from the lattice is at most $\alpha$ times the length of the shortest nonzero lattice vector, requires as preprocessing advice only $N \approx \widetilde{O}(n \exp(\alpha^2 n /(1-2\alpha)^2))$ vectors, and runs in time $\widetilde{O}(nN)$. As our second main contribution, we present reductions showing that it suffices to solve CVP, both in its plain and preprocessing versions, when the input target point is within some bounded distance of the lattice. The reductions are based on ideas due to Kannan and a recent sparsification technique due to Dadush and Kun. Combining our reductions with the LLM algorithm gives an approximation factor of $O(n/\sqrt{\log n})$ for search CVPP, improving on the previous best of $O(n^{1.5})$ due to Lagarias, Lenstra, and Schnorr. When combined with our improved algorithm we obtain, somewhat surprisingly, that only $O(n)$ vectors of preprocessing advice are sufficient to solve CVPP with (the only slightly worse) approximation factor of $O(n)$.
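To make the advice bound concrete, one can evaluate the exponent at a sample value of $\alpha$ (a worked instance for intuition, not a claim from the paper):

```latex
N \approx \widetilde{O}\!\left(n \exp\!\left(\frac{\alpha^2 n}{(1-2\alpha)^2}\right)\right),
\qquad
\alpha = \tfrac{1}{4}:\quad
\frac{\alpha^2}{(1-2\alpha)^2} = \frac{1/16}{(1/2)^2} = \frac{1}{4},
\quad\text{so}\quad
N \approx \widetilde{O}\!\left(n\, e^{n/4}\right).
```

Note that the exponent blows up as $\alpha \to 1/2$, i.e., as the distance guarantee approaches half the minimum distance of the lattice.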

How to Eat Your Entropy and Have It Too – Optimal Recovery Strategies for Compromised RNGs.
Dodis, Y.; Shamir, A.; Stephens-Davidowitz, N.; and Wichs, D.
In *CRYPTO*, 2014.

Paper link bibtex abstract 17 downloads

@inproceedings{DSSWHowEat14, title = {How to Eat Your Entropy and Have It Too -- Optimal Recovery Strategies for Compromised {{RNGs}}}, abstract = {Random number generators (RNGs) play a crucial role in many cryptographic schemes and protocols, but their security proof usually assumes that their internal state is initialized with truly random seeds and remains secret at all times. However, in many practical situations these are unrealistic assumptions: The seed is often gathered after a reset/reboot from low entropy external events such as the timing of manual key presses, and the state can be compromised at unknown points in time via side channels or penetration attacks. The usual remedy (used by all the major operating systems, including Windows, Linux, FreeBSD, MacOS, iOS, etc.) is to periodically replenish the internal state through an auxiliary input with additional randomness harvested from the environment. However, recovering from such attacks in a provably correct and computationally optimal way had remained an unsolved challenge so far. In this paper we formalize the problem of designing an efficient recovery mechanism from state compromise, by considering it as an online optimization problem. If we knew the timing of the last compromise and the amount of entropy gathered since then, we could stop producing any outputs until the state becomes truly random again. However, our challenge is to recover within a time proportional to this optimal solution even in the hardest (and most realistic) case in which (a) we know nothing about the timing of the last state compromise, and the amount of new entropy injected since then into the state, and (b) any premature production of outputs leads to the total loss of all the added entropy \{$\backslash$em used by the RNG\}, since the attacker can use brute force to enumerate all the possible low-entropy states. 
In other words, the challenge is to develop recovery mechanisms which are guaranteed to save the day as quickly as possible after a compromise we are not even aware of. The dilemma that we face is that any entropy used prematurely will be lost, and any entropy which is kept unused will delay the recovery. After developing our formal definitional framework for RNGs with inputs, we show how to construct a nearly optimal RNG which is secure in our model. Our technique is inspired by the design of the Fortuna RNG (which is a heuristic RNG construction that is currently used by Windows and comes without any formal analysis), but we non-trivially adapt it to our much stronger adversarial setting. Along the way, our formal treatment of Fortuna enables us to improve its entropy efficiency by almost a factor of two, and to show that our improved construction is essentially tight, by proving a rigorous lower bound on the possible efficiency of any recovery mechanism in our very general model of the problem.}, url = {https://eprint.iacr.org/2014/167}, booktitle = {{{CRYPTO}}}, author = {Dodis, Yevgeniy and Shamir, Adi and {Stephens-Davidowitz}, Noah and Wichs, Daniel}, year = {2014} }

Random number generators (RNGs) play a crucial role in many cryptographic schemes and protocols, but their security proof usually assumes that their internal state is initialized with truly random seeds and remains secret at all times. However, in many practical situations these are unrealistic assumptions: The seed is often gathered after a reset/reboot from low entropy external events such as the timing of manual key presses, and the state can be compromised at unknown points in time via side channels or penetration attacks. The usual remedy (used by all the major operating systems, including Windows, Linux, FreeBSD, MacOS, iOS, etc.) is to periodically replenish the internal state through an auxiliary input with additional randomness harvested from the environment. However, recovering from such attacks in a provably correct and computationally optimal way had remained an unsolved challenge so far. In this paper we formalize the problem of designing an efficient recovery mechanism from state compromise, by considering it as an online optimization problem. If we knew the timing of the last compromise and the amount of entropy gathered since then, we could stop producing any outputs until the state becomes truly random again. However, our challenge is to recover within a time proportional to this optimal solution even in the hardest (and most realistic) case in which (a) we know nothing about the timing of the last state compromise, and the amount of new entropy injected since then into the state, and (b) any premature production of outputs leads to the total loss of all the added entropy *used by the RNG*, since the attacker can use brute force to enumerate all the possible low-entropy states. In other words, the challenge is to develop recovery mechanisms which are guaranteed to save the day as quickly as possible after a compromise we are not even aware of.
The dilemma that we face is that any entropy used prematurely will be lost, and any entropy which is kept unused will delay the recovery. After developing our formal definitional framework for RNGs with inputs, we show how to construct a nearly optimal RNG which is secure in our model. Our technique is inspired by the design of the Fortuna RNG (which is a heuristic RNG construction that is currently used by Windows and comes without any formal analysis), but we non-trivially adapt it to our much stronger adversarial setting. Along the way, our formal treatment of Fortuna enables us to improve its entropy efficiency by almost a factor of two, and to show that our improved construction is essentially tight, by proving a rigorous lower bound on the possible efficiency of any recovery mechanism in our very general model of the problem.
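The Fortuna scheduling rule mentioned above can be sketched in a few lines: incoming events are spread round-robin across pools, and pool $i$ is drained only on reseeds divisible by $2^i$, so higher pools accumulate entropy for longer before being consumed. The pool count and hash below are illustrative choices (real Fortuna uses 32 pools); this is a toy sketch of the scheduling idea, not the paper's improved construction and not a secure RNG:

```python
import hashlib

NUM_POOLS = 8  # Fortuna uses 32; kept small for illustration

class ToyFortuna:
    def __init__(self):
        self.pools = [hashlib.sha256() for _ in range(NUM_POOLS)]
        self.counter = 0          # round-robin event distribution
        self.reseeds = 0
        self.key = b"\x00" * 32   # internal state being replenished

    def add_event(self, data: bytes):
        self.pools[self.counter % NUM_POOLS].update(data)
        self.counter += 1

    def reseed(self):
        """Drain pool i only on reseeds divisible by 2^i, so pool i
        gathers entropy across ~2^i reseed intervals before use."""
        self.reseeds += 1
        h = hashlib.sha256(self.key)
        used = []
        for i in range(NUM_POOLS):
            if self.reseeds % (2 ** i) == 0:
                h.update(self.pools[i].digest())
                self.pools[i] = hashlib.sha256()  # reset drained pool
                used.append(i)
            else:
                break  # divisibility is nested, so we can stop early
        self.key = h.digest()
        return used

rng = ToyFortuna()
for n in range(64):
    rng.add_event(n.to_bytes(2, "big"))
assert rng.reseed() == [0]        # reseed 1 uses only pool 0
assert rng.reseed() == [0, 1]     # reseed 2 also drains pool 1
assert rng.reseed() == [0]        # reseed 3
assert rng.reseed() == [0, 1, 2]  # reseed 4 reaches pool 2
```

The exponential spacing is what lets the scheme recover even when it cannot estimate how much entropy each event carries, which is the starting point the paper's formal treatment improves on.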

2007
(2)

The Cyclic Sieving Phenomenon on the Alternating Sign Matrices.
Stephens-Davidowitz, N.; and Cloninger, A.
2007.

Paper link bibtex abstract 29 downloads

@unpublished{SCCyclicSieving07, title = {The {{Cyclic Sieving Phenomenon}} on the {{Alternating Sign Matrices}}}, abstract = {We first present a previously unpublished result of Stanton that the group of order four generated by rotation by 90 degrees acting on alternating sign matrices exhibits the CSP with the obvious q-analogue of |ASM(n)|.}, url = {http://noahsd.com/papers/ASMCSP.pdf}, author = {{Stephens-Davidowitz}, Noah and Cloninger, Alex}, year = {2007} }

We first present a previously unpublished result of Stanton that the group of order four generated by rotation by 90 degrees acting on alternating sign matrices exhibits the CSP with the obvious q-analogue of |ASM(n)|.
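The count $|ASM(n)|$ referenced here has a well-known product formula, $\prod_{k=0}^{n-1} (3k+1)!/(n+k)!$; the q-analogue in the abstract is obtained by replacing each factorial with its q-factorial. A quick sketch of the plain count (not from the paper):

```python
from math import factorial

def asm_count(n):
    """Number of n x n alternating sign matrices:
    |ASM(n)| = prod_{k=0}^{n-1} (3k+1)! / (n+k)!."""
    num, den = 1, 1
    for k in range(n):
        num *= factorial(3 * k + 1)
        den *= factorial(n + k)
    return num // den  # the product is always an integer

assert [asm_count(n) for n in range(1, 6)] == [1, 2, 7, 42, 429]
```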

On Link Patterns AND Alternating Sign Matrices.
Hong, F. C. K.; Cloninger, A.; and Stephens-Davidowitz, N.
2007.

Paper link bibtex abstract 17 downloads

@unpublished{HCSLinkPatterns07, title = {On Link Patterns {{AND Alternating Sign Matrices}}}, abstract = {We devise an algorithm to generate an alternating sign matrix(ASM) with the same blue and green link pattern on the circle. We also find a characterization of link patterns that are achieved by a unique ASM.}, url = {http://noahsd.com/papers/ASMLinks.pdf}, author = {Hong, Fraser Chiu Kim and Cloninger, Alex and {Stephens-Davidowitz}, Noah}, year = {2007} }

We devise an algorithm to generate an alternating sign matrix (ASM) with the same blue and green link pattern on the circle. We also find a characterization of link patterns that are achieved by a unique ASM.