Cryptography 9: Applications of Lattice Reduction and Lattice-based Problems


Posted on: December 9, 2025 | Reading time: 20 minutes | Word count: 4048 words | Author: Dang Duong Minh Nhat

Hello everyone! Today, we’re going to explore how lattices can be applied to solve several problems in cryptography. To get the most out of this post, you should first read Part 7, Part 8, and the other posts in this cryptography series. Alright, let’s get started!


Background

To better understand the problems discussed in this blog, we first need to review some background theory.

Abelian Groups

Definition (Abelian Group). A group $\mathbb{G}$ is called abelian if it satisfies the commutative property. That is, for all $a, b \in \mathbb{G}$, we have:

$$ a \cdot b = b \cdot a. $$

Example. The group $(\mathbb{Z}, +)$, the set of integers under addition, is an abelian group because for all $a, b \in \mathbb{Z}$, we have:

$$ a + b = b + a. $$

Field Theory

Definition (Field). A field is a set $\mathbb{F}$, together with two operations: addition $(+)$ and multiplication $(\cdot)$, satisfying the following properties:

  • Additive Group: The set $(\mathbb{F}, +)$ is an abelian group with the identity element $0$.
  • Multiplicative Group: The set $(\mathbb{F} \setminus \left\lbrace 0\right\rbrace, \cdot)$ is an abelian group with the identity element $1 \neq 0$.
  • Distributivity: Multiplication distributes over addition; that is, for all $a, b, c \in \mathbb{F}$, we have $$ a \cdot (b + c) = a \cdot b + a \cdot c. $$

Example. Consider $\mathbb{Z}_{23}$, the set of integers modulo $23$, with addition “$+$” and multiplication “$\cdot$” defined modulo $23$. The identity elements for addition and multiplication in $(\mathbb{Z}_{23}, +, \cdot)$ are $0$ and $1$, respectively. Thus, $\mathbb{Z}_{23}$ is a field.
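This background is easy to sanity-check in code. The following Python sketch (my own, not from any reference) confirms the key field property of $\mathbb{Z}_{23}$: every nonzero element has a multiplicative inverse.

```python
# Sanity check: Z_23 is a field because every nonzero residue has a
# multiplicative inverse modulo the prime 23.
p = 23

# pow(a, -1, m) (Python 3.8+) computes the modular inverse of a mod m.
inverses = {a: pow(a, -1, p) for a in range(1, p)}

# a * a^{-1} = 1 (mod p) for all nonzero a; 0 and 1 are the two identities.
assert all((a * inv) % p == 1 for a, inv in inverses.items())
assert len(inverses) == p - 1
```

For a composite modulus such as $10$, this loop would fail (e.g. $2$ has no inverse mod $10$), which is exactly why the definition excludes such rings from being fields.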

Applications of Lattice Reduction

If we can model a specific problem where the solution exists as a short vector in a certain lattice, we can solve it using lattice reduction. Before discussing the applications of lattice reduction in cryptography, we will look at an application in algebra. The problem considered here is reconstructing the minimal polynomial of an algebraic number given only an approximation of that number. This problem is proven to be solvable using lattice reduction.

Definition (Minimal Polynomial). Let $\alpha$ be algebraic over a field $F$ (that is, $\alpha$ lies in some extension field of $F$ and is a root of a nonzero polynomial in $F[x]$). The minimal polynomial of $\alpha$ over $F$ is the monic polynomial of lowest degree in $F[x]$ having $\alpha$ as a root.

We will use lattice reduction to find the minimal polynomial $f(x)$ of $\alpha = 7 + \sqrt[3]{5}$ given the approximation $7 + \sqrt[3]{5} \approx 8.70997594 = \beta$ up to 8 decimal places.

First, we suspect that $f$ has degree 3, so we write:

$$ f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3, \quad a_i \in \mathbb{Z}. $$

Since $f(\alpha) = 0$, we expect $f(\beta) \approx 0$. To work with integers, we scale this expression by $10^8$ and round each term down, giving:

$$ 10^8 a_0 + \lfloor 10^8 \beta \rfloor a_1 + \lfloor 10^8 \beta^2 \rfloor a_2 + \lfloor 10^8 \beta^3 \rfloor a_3 \approx 0. $$

Now, consider the lattice with the basis (given by rows):

$$ \begin{aligned} \mathbf{B} & = \begin{bmatrix} 10^8 & 1 & 0 & 0 \\ \lfloor 10^8 \beta \rfloor & 0 & 1 & 0 \\ \lfloor 10^8 \beta^2 \rfloor & 0 & 0 & 1 \\ \lfloor 10^8 \beta^3 \rfloor & 0 & 0 & 0 \end{bmatrix} \\ & = \begin{bmatrix} 100000000 & 1 & 0 & 0 \\ 870997594 & 0 & 1 & 0 \\ 7586368087 & 0 & 0 & 1 \\ 66077083514 & 0 & 0 & 0 \end{bmatrix} \end{aligned} $$

Note that the target vector $\mathbf{x} = (c, a_0, a_1, a_2)$ containing the coefficients of the minimal polynomial lies in this lattice via the linear combination $\mathbf{t} = (a_0, a_1, a_2, 1)$. That is:

$$ \mathbf{tB} = \mathbf{x}. $$

Here, $c$ is an integer very close to $0$. Next, running the LLL algorithm on the basis $\mathbf{B}$ gives us the reduced basis $\mathbf{B}'$:

$$ \mathbf{B}' = \begin{bmatrix} 5 & -348 & 147 & -21 \\ 438 & -75 & 116 & 188 \\ 109 & -214 & -563 & -159 \\ 357 & 136 & 220 & -419 \end{bmatrix}. $$

The first row is a very short vector, and we recognize it as the solution because its first element, $5$, is very close to $0$.

Reading the coefficients from this vector, we get:

$$ a_0 = -348, \quad a_1 = 147, \quad a_2 = -21. $$

Thus, the minimal polynomial of $\alpha = 7 + \sqrt[3]{5}$ is:

$$ f(x) = x^3 - 21x^2 + 147x - 348. $$
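As a quick numerical check (my own sketch, not part of the original derivation), we can confirm in plain Python that the combination $\mathbf{t} = (a_0, a_1, a_2, 1)$ really maps to the short vector $(5, -348, 147, -21)$, and that the recovered polynomial vanishes at $\alpha = 7 + \sqrt[3]{5}$:

```python
# Verification sketch for the minimal polynomial example.
# The basis entries below are copied from the matrix B in the post.
rows = [
    [10**8,        1, 0, 0],
    [870997594,    0, 1, 0],   # floor(10^8 * beta)
    [7586368087,   0, 0, 1],   # floor(10^8 * beta^2)
    [66077083514,  0, 0, 0],   # floor(10^8 * beta^3)
]
a0, a1, a2 = -348, 147, -21    # coefficients read from the LLL output

# x = t * B (row vector times matrix), with t = (a0, a1, a2, 1)
t = [a0, a1, a2, 1]
x = [sum(t[i] * rows[i][j] for i in range(4)) for j in range(4)]
assert x == [5, a0, a1, a2]    # the first entry c = 5 is indeed tiny

# f(x) = x^3 - 21x^2 + 147x - 348 vanishes at alpha = 7 + 5^(1/3).
alpha = 7 + 5 ** (1 / 3)
f_alpha = alpha**3 - 21 * alpha**2 + 147 * alpha - 348
assert abs(f_alpha) < 1e-8
```

The exactness of the result is no accident: $(\alpha - 7)^3 = 5$ expands to exactly $\alpha^3 - 21\alpha^2 + 147\alpha - 348 = 0$.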

Lattice-based Problems

In this section, we study several problems that can be solved by modeling them as $\textbf{SVP}_\gamma$ or $\textbf{CVP}_\gamma$ problems. These problems can be characterized by their linearity; we call them lattice-based problems.

Coppersmith’s Method

In this section, we present a simple, informal overview of Coppersmith’s method in the univariate case.

Let $N$ be a composite number and $f(x) = x^d + \sum_{i=0}^{d-1} a_i x^i \in \mathbb{Z}[x]$ be a monic polynomial of degree $d$. Coppersmith’s method helps us find all integer roots $x_0$ satisfying $f(x_0) \equiv 0 \pmod{N}$ and $|x_0| < B$, for a bound $B$ depending on $N$ and $d$.

The main idea of the algorithm is to construct a polynomial $h(x)$ over the integers that also satisfies $h(x_0) = 0$. Since there are efficient algorithms to find roots of univariate polynomials over $\mathbb{Z}$, finding such a polynomial allows us to find the roots of the original polynomial. To achieve this, we observe that adding integer multiples of $g_i(x) = N x^i \in \mathbb{Z}[x]$ to $f$ results in polynomials sharing the same roots as $f$ modulo $N$. Thus, we consider the lattice generated by the rows of the following matrix $\mathbf{B}$:

$$ \mathbf{B} = \begin{bmatrix} N & & & & \\ & BN & & & \\ & & B^2 N & & \\ & & & \ddots & \\ & & & & B^{d-1} N \\ a_0 & a_1 B & a_2 B^2 & \cdots & a_{d-1} B^{d-1} & B^d \end{bmatrix} $$

The first rows encode the polynomials $g_i(Bx)$ for $0 \le i < d$, while the last row encodes $f(Bx)$. Consequently, each vector in this lattice represents a polynomial sharing the same roots as $f$ modulo $N$.

Crucially, if there exists a polynomial $h(x)$ in this lattice such that $|h(x_0)| < N$, then we have $h(x_0) = 0$ over $\mathbb{Z}$. To achieve this, the polynomial needs to have “small” coefficients; specifically, we desire that:

$$ |h_i x_0^i| \le |h_i B^i| < \frac{N}{d + 1} $$

for every coefficient $h_i$ of $h(x)$ (since $h(x)$ has $d+1$ coefficients). Therefore, we want the coefficients $h_i$ to be as small as possible. Since each vector in the lattice corresponds to a polynomial $h(Bx)$ sharing roots modulo $N$ with $f$, if we find a short vector in the lattice, the coefficients of the corresponding polynomial will be small, ensuring $|h(x_0)| < N \Rightarrow h(x_0) = 0$ in $\mathbb{Z}$. Thus, we use LLL to find a short vector—i.e., the desired polynomial.

Since $\mathbf{B}$ is a triangular matrix, we can easily calculate:

$$ \det(\mathcal{L}(\mathbf{B})) = \det(\mathbf{B}) = B^{d(d+1)/2} N^d $$

From Proposition 15 in Cryptography 8, with $\delta = 3/4$, we have:

$$ \begin{aligned} \lVert \mathbf{b}_1\rVert & \le \left( \frac{2}{\sqrt{4\delta - 1}} \right)^d \sqrt{d + 1} |\det(\mathcal{L})|^{1/(d + 1)} \\ & = 2^{d/2} \sqrt{d + 1} B^{d/2} N^{d/(d + 1)} \\ & = 2^{d/2} \sqrt{d + 1} B^{d/2} N N^{-1/(d+1)} \end{aligned} $$

Where $\mathbf{b}_1 = (b_0, \dots, b_d)$ is the first vector in the LLL-reduced basis. We choose this vector as the coefficients of the polynomial $h(Bx)$ and see that by setting:

$$ B < N^{\frac{2}{d(d+1)}} / \big(2(d + 1)^{3/d}\big) $$

we achieve the desired conditions on the coefficients $h_i$:

$$ |h_i B^i| = |b_i| \le \lVert \mathbf{b}_1\rVert < \frac{N}{d + 1} $$

Therefore, to find small roots of $f \bmod N$, we take:

$$ h(x) = b_0 + \frac{b_1}{B}x + \frac{b_2}{B^2}x^2 + \cdots + \frac{b_{d-1}}{B^{d-1}}x^{d-1} + \frac{b_d}{B^d}x^d $$

and solve for the roots of $h(x)$ over $\mathbb{Z}$. Then, we check each root to ensure it is a root of $f \bmod N$.
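The basis construction above is mechanical, so here is a small Python sketch of it (the function name is my own). It builds the $(d+1) \times (d+1)$ matrix for a monic $f$ and bound $B$:

```python
# Sketch (names are mine): build the (d+1) x (d+1) Coppersmith basis for a
# monic polynomial f(x) = x^d + a_{d-1} x^{d-1} + ... + a_0 modulo N.
def coppersmith_basis(N, a, B):
    """a = [a_0, ..., a_{d-1}] are the non-leading coefficients of monic f."""
    d = len(a)
    rows = []
    for i in range(d):                 # rows encoding g_i(Bx) = N * (Bx)^i
        row = [0] * (d + 1)
        row[i] = N * B**i
        rows.append(row)
    rows.append([a[i] * B**i for i in range(d)] + [B**d])   # f(Bx)
    return rows

# For the example in the next section (N = 667, f = x^2 + 6x + 352, B = 20),
# this reproduces the 3 x 3 matrix used there.
M = coppersmith_basis(667, [352, 6], 20)
assert M == [[667, 0, 0], [0, 13340, 0], [352, 120, 400]]
```

In practice one would then feed this matrix to an LLL implementation (e.g. fpylll or SageMath) and read $h(Bx)$ off the first reduced row.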

Example

Let $N = 23 \cdot 29 = 667$ and $f(x) = x^2 + 6x + 352 \in \mathbb{Z}[x]$. Note that $f$ has a small root $x_0 = 15$ modulo $N$ but not over $\mathbb{Z}$. That is, $f(15) \equiv 0 \pmod{N}$, but $f(15) \neq 0$. We use Coppersmith’s method as above to find this small root. Choose $B = 20$ and construct the lattice generated by the rows of the matrix $\mathbf{B}$:

$$ \begin{aligned} \mathbf{B} & = \begin{bmatrix} 667 & 0 & 0 \\ 0 & 20 \cdot 667 & 0 \\ 352 & 6 \cdot 20 & 20^2 \end{bmatrix} \\ & = \begin{bmatrix} 667 & 0 & 0 \\ 0 & 13340 & 0 \\ 352 & 120 & 400 \end{bmatrix} \end{aligned} $$

Running the LLL algorithm on this basis yields the reduced basis $\mathbf{B}'$:

$$ \mathbf{B}' = \begin{bmatrix} -315 & 120 & 400 \\ 352 & 120 & 400 \\ 167 & 12260 & -3600 \end{bmatrix} $$

We take the first row and interpret it as the coefficients of the polynomial $h(Bx)$:

$$ h(Bx) = 400x^2 + 120x - 315 $$

Therefore:

$$ h(x) = \frac{400}{20^2}x^2 + \frac{120}{20}x - 315 = x^2 + 6x - 315 $$

Solving $h(x) = 0$ (using the quadratic formula or Newton’s method), we get the root $x_0 = 15$. We also obtain the root $-21$, whose absolute value exceeds $B$. In practice, Coppersmith’s method often performs better than its theoretical bounds suggest.
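All of the arithmetic in this example is small enough to verify directly; the following Python snippet (my own check) confirms the roots of $f$ modulo $N$ and of $h$ over $\mathbb{Z}$:

```python
# Verify the worked example: f(x) = x^2 + 6x + 352 has root 15 mod 667,
# and the LLL-derived polynomial h(x) = x^2 + 6x - 315 has the same root
# over the integers.
N = 667

def f(x):
    return x * x + 6 * x + 352

def h(x):
    return x * x + 6 * x - 315

assert f(15) % N == 0 and f(15) != 0    # a root modulo N only
assert h(15) == 0 and h(-21) == 0       # both integer roots of h

# Only roots with |x0| < B = 20 are guaranteed by the method; -21 falls
# outside that bound, so we keep just x0 = 15.
roots = [x for x in (15, -21) if abs(x) < 20 and f(x) % N == 0]
assert roots == [15]
```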

The Knapsack Problem

The knapsack problem is a famous computational problem belonging to the NP-complete class, and it was once used as a trapdoor function in some public-key cryptosystems. The most common version of the knapsack problem in cryptography and cryptanalysis is the subset sum problem, where we need to find a subset of a given set of numbers such that their sum equals a specific target value. In this section, we will study a special case of the subset sum problem, as well as the modular subset sum problem and its generalizations.

Low-density Subset Sum Problems

Definition (Subset Sum Problem). Given positive integers $a_1, \ldots, a_n$ (called weights) and a target integer $s$, find a subset of $a_i$ such that their sum equals $s$. That is, find $e_1, \ldots, e_n$ with $e_i \in \left\lbrace 0, 1\right\rbrace$ such that:

$$ \sum_{i=1}^{n} e_i a_i = s $$

Many cryptosystems based on the subset sum problem have been proven insecure due to algorithms capable of solving special cases of low-density subset sum problems.

The density of a set of weights $a_1, \ldots, a_n$ is defined as:

$$ d = \frac{n}{\log_2 \max(a_i)} $$

Lagarias and Odlyzko proposed the LO algorithm, which solves subset sum problems of density $d < 0.6463$ given access to an SVP (Shortest Vector Problem) oracle. Coster et al. later proposed the CJLOSS algorithm, a refinement of LO that raises the bound to $d < 0.9408$. Both algorithms are based on lattice reduction and perform well in practice.

The strategy is to construct a lattice containing a vector that represents the $e_i$’s as a short vector. A simple lattice following this idea is generated by the rows of the basis matrix:

$$ \mathbf{B} = \begin{bmatrix} 1 & & & & a_1 \\ & 1 & & & a_2 \\ & & \ddots & & \vdots \\ & & & 1 & a_n \\ & & & & s \end{bmatrix} $$

Note that the linear combination $\mathbf{t} = (e_1, \ldots, e_n, -1)$ yields the (short) vector $\mathbf{x}_1 = (e_1, \ldots, e_n, 0)$, meaning $\mathbf{tB} = \mathbf{x}_1$. Thus, we can expect that lattice reduction will help us find this vector. The CJLOSS algorithm uses a slightly different lattice, with basis $\mathbf{B}'$ given by:

$$ \mathbf{B}' = \begin{bmatrix} 1 & & & & N a_1 \\ & 1 & & & N a_2 \\ & & \ddots & & \vdots \\ & & & 1 & N a_n \\ \frac{1}{2} & \frac{1}{2} & \cdots & \frac{1}{2} & N s \end{bmatrix} $$

where $N > \sqrt{n}$ is an integer. In this case, the same linear combination $\mathbf{t} = (e_1, \ldots, e_n, -1)$ yields the vector:

$$ \mathbf{x}_2 = (e_1 - \tfrac{1}{2}, \ldots, e_n - \tfrac{1}{2}, 0) $$

thus $\lVert \mathbf{x}_2 \rVert = \tfrac{1}{2}\sqrt{n}$. The probability $P$ that $\mathbf{x}_2$ is not the unique shortest vector in the lattice is bounded by:

$$ P \le n(4n\sqrt{n} + 1)\frac{2^{c_0 n}}{\max(a_i)} $$

with $c_0 = 1.0628\ldots$. This upper bound tends to $0$ when $\max(a_i)$ grows faster than $2^{c_0 n}$, so we require $\max(a_i) > 2^{c_0 n}$, and therefore:

$$ \begin{aligned} & \max(a_i) > 2^{c_0 n} \\ \implies & \log_2 \max(a_i) > c_0 n \\ \implies & \frac{1}{c_0} > \frac{n}{\log_2 \max(a_i)} \\ \implies & d < \frac{1}{c_0} \\ \implies & d < 0.9408\ldots \end{aligned} $$

Thus, most subset sum problems with density $d < 0.9408$ can be solved efficiently given an SVP oracle. In practice, such an oracle does not exist, but the LLL algorithm often finds a vector short enough to solve the subset sum problem. Moreover, the theoretical bound $d < 0.9408$ is not absolute—higher density instances can often be solved in practice.
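To make the construction concrete, here is a Python sketch (helper name mine) that builds the CJLOSS basis $\mathbf{B}'$ with exact fractions and checks that $\mathbf{t} = (e_1, \ldots, e_n, -1)$ indeed yields $\mathbf{x}_2$:

```python
from fractions import Fraction

# Sketch of the CJLOSS basis above, using exact rational arithmetic.
def cjloss_basis(weights, s, N):
    n = len(weights)
    rows = []
    for i, a in enumerate(weights):
        row = [Fraction(0)] * (n + 1)
        row[i] = Fraction(1)
        row[n] = Fraction(N * a)       # last column carries N * a_i
        rows.append(row)
    rows.append([Fraction(1, 2)] * n + [Fraction(N * s)])   # final row
    return rows

# Instance from the example below: t = (e_1, ..., e_n, -1) must map to
# x_2 = (e_1 - 1/2, ..., e_n - 1/2, 0).
weights, s, N = [83, 59, 47, 81, 76, 51], 291, 3
e = [1, 0, 0, 1, 1, 1]
t = e + [-1]
Bp = cjloss_basis(weights, s, N)
x2 = [sum(t[i] * Bp[i][j] for i in range(len(t))) for j in range(len(t))]
assert x2 == [Fraction(ei) - Fraction(1, 2) for ei in e] + [Fraction(0)]
```

The last coordinate cancels exactly because $\sum e_i a_i = s$, which is what makes $\mathbf{x}_2$ short.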

Example. Let $(a_1, a_2, a_3, a_4, a_5, a_6) = (83, 59, 47, 81, 76, 51)$ with $n = 6$ and target $s = 291$. The density of this problem is:

$$ d = \frac{n}{\log_2 \max(a_i)} = \frac{6}{6.375} = 0.9412 $$

Although this density is slightly higher than the theoretical limit, we still apply the CJLOSS algorithm to test it. With $N = 3$, we construct the lattice generated by the rows of the CJLOSS basis $\mathbf{B}'$ (as above). Running the LLL algorithm on this basis yields a reduced basis whose first vector is:

$$ \mathbf{b}_1 = \left(-\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, -\frac{1}{2}, 0\right) $$

which is the vector $\mathbf{x} = (e_1 - \tfrac{1}{2}, \ldots, e_6 - \tfrac{1}{2}, 0)$ (or $-\mathbf{x}$). We deduce the $e_i$’s by calculating:

$$ (e_1, e_2, e_3, e_4, e_5, e_6) = \left(\frac{1}{2} - b_1, \ldots, \frac{1}{2} - b_6\right) = (1, 0, 0, 1, 1, 1) $$

and clearly:

$$ \sum_{i=1}^{n} e_i a_i = s $$
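This final check, together with the density quoted above, is easy to reproduce in Python (a small verification of my own):

```python
import math

# Check the example: the recovered e_i's solve the subset sum instance,
# and the density matches the value quoted above.
a = [83, 59, 47, 81, 76, 51]
s = 291
e = [1, 0, 0, 1, 1, 1]

assert sum(ei * ai for ei, ai in zip(e, a)) == s   # 83 + 81 + 76 + 51 = 291

density = len(a) / math.log2(max(a))
assert abs(density - 0.9412) < 1e-3   # slightly above the 0.9408 bound
```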

The above method can be extended to other variants such as:

  • Multiple Subset Sum Problem
  • Modular Subset Sum Problem
  • Multiple Modular Subset Sum Problem

Definition (Multiple Subset Sum Problem). Given positive integers $a_{1,1}, \ldots, a_{k,n}$ (weights) and target integers $s_1, \ldots, s_k$, find $e_1, \ldots, e_n$ with $e_i \in \left\lbrace 0, 1\right\rbrace$ such that:

$$ \sum_{i=1}^{n} e_i a_{j,i} = s_j \quad \text{for all } 1 \le j \le k $$

The density of the multiple subset sum problem is defined as:

$$ d = \frac{n}{k \cdot \log_2 \max(a_{j,i})} $$

The multiple subset sum problem can be solved with the same density bound $d < 0.9408$ using a lattice with basis $\mathbf{B}$ given by:

$$ \mathbf{B} = \begin{bmatrix} 1 & & & & 0 & N a_{1,1} & N a_{2,1} & \cdots & N a_{k,1} \\ & 1 & & & 0 & N a_{1,2} & N a_{2,2} & \cdots & N a_{k,2} \\ & & \ddots & & \vdots & \vdots & & \vdots & \vdots \\ & & & 1 & 0 & N a_{1,n} & N a_{2,n} & \cdots & N a_{k,n} \\ \frac{1}{2} & \frac{1}{2} & \cdots & \frac{1}{2} & \frac{1}{2} & N s_1 & N s_2 & \cdots & N s_k \end{bmatrix} $$

with $N > \sqrt{\frac{n+1}{4}}$. As before, the linear combination $(e_1, \ldots, e_n, -1)$ yields the short vector:

$$ \mathbf{x} = (e_1 - \tfrac{1}{2}, \ldots, e_n - \tfrac{1}{2}, -\tfrac{1}{2}, 0, \ldots, 0) $$

and thus, we expect lattice reduction to find this target vector.
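Again we can check the claim with exact arithmetic; this Python sketch (helper name and the small weight matrix are mine, chosen only for illustration) builds the basis above and confirms that $(e_1, \ldots, e_n, -1)$ produces the stated short vector:

```python
from fractions import Fraction

# Sketch of the multiple-subset-sum basis above, for a tiny k = 2 instance.
def multi_subset_sum_basis(A, s, N):
    """A is a k x n matrix of weights a_{j,i}; s is the list of k targets."""
    k, n = len(A), len(A[0])
    rows = []
    for i in range(n):
        row = [Fraction(0)] * (n + 1) + [Fraction(N * A[j][i]) for j in range(k)]
        row[i] = Fraction(1)
        rows.append(row)
    rows.append([Fraction(1, 2)] * (n + 1) + [Fraction(N * sj) for sj in s])
    return rows

A = [[5, 9, 4, 7], [3, 8, 6, 2]]     # hypothetical weights a_{j,i}
e = [1, 0, 1, 1]
s = [sum(ei * aji for ei, aji in zip(e, row)) for row in A]  # consistent targets
t = e + [-1]
Bm = multi_subset_sum_basis(A, s, 3)

# Expect x = (e_1 - 1/2, ..., e_n - 1/2, -1/2, 0, ..., 0).
x = [sum(t[i] * Bm[i][j] for i in range(len(Bm))) for j in range(len(Bm[0]))]
expected = ([Fraction(ei) - Fraction(1, 2) for ei in e]
            + [Fraction(-1, 2)] + [Fraction(0)] * len(A))
assert x == expected
```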

Definition (Modular Subset Sum Problem). Given positive integers $a_1, \ldots, a_n$ (weights), a target integer $s$, and a modulus $M$, find $e_1, \ldots, e_n$ with $e_i \in \left\lbrace 0, 1\right\rbrace$ such that:

$$ \sum_{i=1}^{n} e_i a_i \equiv s \pmod{M} $$

Definition (Multiple Modular Subset Sum Problem). Given positive integers $a_{1,1}, \ldots, a_{k,n}$ (weights), target integers $s_1, \ldots, s_k$, and a modulus $M$, find $e_1, \ldots, e_n$ with $e_i \in \left\lbrace 0, 1\right\rbrace$ such that:

$$ \sum_{i=1}^{n} e_i a_{j,i} \equiv s_j \pmod{M} $$

for all $1 \le j \le k$. The density of the multiple modular subset sum problem is defined as:

$$ d = \frac{n}{k \cdot \log_2 M} $$

Note that the standard modular subset sum problem is just a special case where $k = 1$. The multiple modular subset sum problem can be solved when $d < 0.9408$ and

$$ k = o\left(\frac{n}{\log_2((n+1)\sqrt{n+1})}\right) $$

using a lattice with basis $\mathbf{B}'$ given by:

$$ \mathbf{B}' = \begin{bmatrix} 1 & & & & 0 & N a_{1,1} & N a_{2,1} & \cdots & N a_{k,1} \\ & 1 & & & 0 & N a_{1,2} & N a_{2,2} & \cdots & N a_{k,2} \\ & & \ddots & & \vdots & \vdots & & \vdots & \vdots \\ & & & 1 & 0 & N a_{1,n} & N a_{2,n} & \cdots & N a_{k,n} \\ & & & & & N M \\ & & & & & & N M \\ & & & & & & & \ddots \\ & & & & & & & & N M \\ \frac{1}{2} & \frac{1}{2} & \cdots & \frac{1}{2} & \frac{1}{2} & N s_1 & N s_2 & \cdots & N s_k \end{bmatrix} $$

To see why the target vector lies in this lattice, we rewrite the modular equations:

$$ \sum_{i=1}^{n} e_i a_{j,i} \equiv s_j \pmod{M} $$

as

$$ \sum_{i=1}^{n} e_i a_{j,i} = s_j + \ell_j M $$

and observe that the linear combination $(e_1, \ldots, e_n, -\ell_1, \ldots, -\ell_k, -1)$ yields the target vector:

$$ \mathbf{x} = (e_1 - \tfrac{1}{2}, \ldots, e_n - \tfrac{1}{2}, -\tfrac{1}{2}, 0, \ldots, 0) $$
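The same kind of check works here; the following Python sketch (the helper name and the tiny instance are mine) builds this basis with exact fractions and verifies that the combination $(e_1, \ldots, e_n, -\ell_1, \ldots, -\ell_k, -1)$ yields the target vector:

```python
from fractions import Fraction

# Sketch of the multiple modular subset sum basis above.
def modular_basis(A, s, M, N):
    k, n = len(A), len(A[0])
    dim = n + 1 + k
    rows = []
    for i in range(n):                 # identity part plus weight columns
        row = [Fraction(0)] * dim
        row[i] = Fraction(1)
        for j in range(k):
            row[n + 1 + j] = Fraction(N * A[j][i])
        rows.append(row)
    for j in range(k):                 # N*M rows absorb the modular reductions
        row = [Fraction(0)] * dim
        row[n + 1 + j] = Fraction(N * M)
        rows.append(row)
    rows.append([Fraction(1, 2)] * (n + 1) + [Fraction(N * sj) for sj in s])
    return rows

A, M, N = [[7, 11, 5], [9, 4, 13]], 10, 3   # hypothetical weights and modulus
e = [1, 1, 0]
sums = [sum(ei * aji for ei, aji in zip(e, row)) for row in A]
l = [v // M for v in sums]                  # l_j with sum = s_j + l_j * M
s = [v % M for v in sums]                   # targets reduced mod M
t = e + [-lj for lj in l] + [-1]
Bmod = modular_basis(A, s, M, N)

x = [sum(t[i] * Bmod[i][j] for i in range(len(Bmod))) for j in range(len(Bmod[0]))]
expected = ([Fraction(ei) - Fraction(1, 2) for ei in e]
            + [Fraction(-1, 2)] + [Fraction(0)] * len(A))
assert x == expected
```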

The Hidden Number Problem

The Hidden Number Problem (HNP) was introduced for the purpose of proving results related to the bit security of the Diffie–Hellman key exchange protocol. Broadly speaking, HNP deals with recovering a secret “hidden” number when we have partial information about its linear relations. Consequently, HNP has naturally become useful in cryptography, specifically in side-channel attacks. In this section, we will study the hidden number problem as well as its extended version.

The original formulation of HNP is to find a secret integer $\alpha$ modulo $p$ (where $p$ is a known prime), given the most significant bits (MSB) of a number of values $t_i \alpha \bmod p$, where $t_i$ are known random numbers. We will follow a reformulation, which is a slight variant, modeling the problem as finding the solution to a system of linear equations.

Definition (Hidden Number Problem). Let $p$ be a prime and $\alpha \in [1, p - 1]$ be a secret integer. Recover $\alpha$ given $m$ pairs of integers $\left\lbrace (t_i, a_i)\right\rbrace_{i=1}^m$ such that:

$$ \beta_i - t_i \alpha + a_i \equiv 0 \pmod{p} $$

where $\beta_i$ are unknown values satisfying $|\beta_i| < B$ for a constant $B < p$. With suitable parameters, HNP can be solved by reduction to the Closest Vector Problem (CVP). Consider the lattice generated by the rows of the basis matrix $\mathbf{B}$:

$$ \mathbf{B} = \begin{bmatrix} p & & & & \\ & p & & & \\ & & \ddots & & \\ & & & p & \\ t_1 & t_2 & \cdots & t_m & 1/p \end{bmatrix} $$

By rewriting the HNP equation as $\beta_i + a_i = t_i \alpha + k_i p$ with $k_i$ being integers, we see that the linear combination $\mathbf{x} = (k_1, \ldots, k_m, \alpha)$ generates the lattice vector:

$$ \mathbf{xB} = (\beta_1 + a_1, \ldots, \beta_m + a_m, \alpha/p) $$

Let $\mathbf{t} = (a_1, \ldots, a_m, 0)$ and $\mathbf{u} = (\beta_1, \ldots, \beta_m, \alpha/p)$; then $\mathbf{xB} - \mathbf{t} = \mathbf{u}$. The length of $\mathbf{u}$ is bounded by $\sqrt{m+1}\,B$, while the lattice determinant is $p^{m-1}$. Therefore, we can expect a CVP approximation algorithm to find the vector $\mathbf{u}$, from which we read the secret value $\alpha$ by multiplying the last element by $p$. It has been shown that this method succeeds using Babai’s algorithm combined with LLL if:

$$ m = 2 \left\lceil \sqrt{\log p} \right\rceil $$

and

$$ B \le \frac{p}{2^k} \quad \text{with} \quad k = \left\lceil \sqrt{\log p} \right\rceil + \left\lceil \log \log p \right\rceil $$
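To get a feel for these parameters (assuming, as is standard in this setting, that $\log$ means $\log_2$), here is a quick computation for a 256-bit prime:

```python
import math

# Illustration: the number of HNP samples m and the known-MSB count k
# required by the provable CVP attack, for a 256-bit prime modulus p.
log_p = 256                                  # log2(p) for a 256-bit prime
m = 2 * math.ceil(math.sqrt(log_p))
k = math.ceil(math.sqrt(log_p)) + math.ceil(math.log2(log_p))

# B <= p / 2^k means roughly the top k bits of each t_i * alpha mod p leak.
assert (m, k) == (32, 24)
```

So for a 256-bit prime, 32 samples with the top 24 bits of each $t_i\alpha \bmod p$ suffice for the provable attack; heuristically, far fewer leaked bits per sample are often enough.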

In practice, HNP can be solved with looser parameter conditions, especially in small dimensions and with optimizations like recentering. Furthermore, studies suggest that the SVP (Shortest Vector Problem) method using Kannan’s embedding is often more effective than the CVP method. With the SVP method, we embed the target vector of the CVP into the lattice basis as an extra row, obtaining basis $\mathbf{B}'$:

$$ \mathbf{B}' = \begin{bmatrix} p & & & & & \\ & p & & & & \\ & & \ddots & & & \\ & & & p & & \\ t_1 & t_2 & \cdots & t_m & B/p & \\ a_1 & a_2 & \cdots & a_m & & B \end{bmatrix} $$

This lattice contains the vector:

$$ \mathbf{u}' = (\beta_1, \ldots, \beta_m, \alpha B/p, -B) $$

generated by the linear combination $(k_1, \ldots, k_m, \alpha, -1)$. We have $\lVert \mathbf{u}'\rVert < \sqrt{m+2}\,B$, so there is a high probability of finding this short vector among the vectors of the LLL-reduced basis. Notably, the even shorter vector $(0, \ldots, 0, B, 0)$, generated by the combination $(-t_1, \ldots, -t_m, p, 0)$, also belongs to this lattice, so $\mathbf{u}'$ is likely to be the second vector in the basis after LLL reduction.

Example

Let $p = 401$ and $(t_1, t_2, t_3) = (143, 293, 304)$. We will use the SVP method to solve the HNP and recover the secret integer $\alpha = 309$. Suppose we have:

$$ \beta_i - t_i \alpha + a_i \equiv 0 \pmod{p}, \quad i \in \left\lbrace 1, 2, 3\right\rbrace $$

with $(a_1, a_2, a_3) = (62, 300, 86)$ and $|\beta_i| < B = 20$. We construct the lattice generated by the rows of $\mathbf{B}$:

$$ \begin{aligned} \mathbf{B} & = \begin{bmatrix} p & 0 & 0 & 0 & 0 \\ 0 & p & 0 & 0 & 0 \\ 0 & 0 & p & 0 & 0 \\ t_1 & t_2 & t_3 & B/p & 0 \\ a_1 & a_2 & a_3 & 0 & B \end{bmatrix} \\ & = \begin{bmatrix} 401 & 0 & 0 & 0 & 0 \\ 0 & 401 & 0 & 0 & 0 \\ 0 & 0 & 401 & 0 & 0 \\ 143 & 293 & 304 & 20/401 & 0 \\ 62 & 300 & 86 & 0 & 20 \end{bmatrix} \end{aligned} $$

Running the LLL algorithm on this basis, we obtain the LLL-reduced basis $\mathbf{B}'$:

$$ \mathbf{B}' = \begin{bmatrix} 0 & 0 & 0 & 20 & 0 \\ -15 & -12 & -16 & \frac{1840}{401} & 20 \\ 24 & -5 & -6 & -\frac{1800}{401} & 20 \\ 6 & -42 & -5 & -\frac{1440}{401} & -40 \\ -11 & -1 & 57 & -\frac{3880}{401} & 20 \end{bmatrix} $$

As expected, the first basis vector is $(0, 0, 0, B, 0)$. To recover $\alpha$, we examine the other basis vectors. Our target vector is $\mathbf{u}' = (\beta_1, \beta_2, \beta_3, \alpha B/p, -B)$, and although no basis vector ends in $-20$, $-\mathbf{u}'$ has the same length and may appear instead. Indeed, the second basis vector is $-\mathbf{u}'$, and it satisfies $|\beta_i| < B$ for $i \in \left\lbrace 1, 2, 3\right\rbrace$. Thus, we can read $\alpha$ from the penultimate element of $-\mathbf{u}'$:

$$ \alpha = -\left( \frac{1840}{401} \right) \cdot \frac{401}{20} \bmod p = 309 $$
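The whole example can be verified end-to-end in a few lines of Python (my own check):

```python
from fractions import Fraction

# Verify the HNP example: the beta_i are small, and the second reduced basis
# vector really encodes alpha = 309.
p, B, alpha = 401, 20, 309
t = [143, 293, 304]
a = [62, 300, 86]

# beta_i = t_i * alpha - a_i (mod p), recentered into (-p/2, p/2)
beta = [(ti * alpha - ai) % p for ti, ai in zip(t, a)]
beta = [b - p if b > p // 2 else b for b in beta]
assert beta == [15, 12, 16] and all(abs(b) < B for b in beta)

# Recover alpha from the penultimate entry 1840/401 of the second row of B'.
recovered = int(-(Fraction(1840, 401) * Fraction(p, B))) % p
assert recovered == alpha
```

Note that the penultimate entry corresponds to the combination coefficient $-92 \equiv 309 \pmod{401}$, which is why the final reduction modulo $p$ is needed.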


Through this post, we have seen that lattice reduction (specifically the LLL algorithm) is not merely a theoretical concept but a versatile tool with profound implications in cryptanalysis. By transforming complex constraints into linear relations within a lattice, we can reduce hard problems—such as finding small roots modulo $N$ (Coppersmith), solving knapsack-type cryptosystems, or recovering secret keys from side-channel leakage (HNP)—into the problem of finding short vectors (SVP/CVP). Understanding these attacks is crucial, as they form the foundation for assessing the security parameters of modern cryptographic primitives and pave the way for understanding Lattice-based Cryptography in the post-quantum era.

References

  1. Joseph Surin and Shaanan Cohney. (2023). A Gentle Tutorial for Lattice-Based Cryptanalysis. Retrieved from https://eprint.iacr.org/2023/032.pdf

Connect with Me

Connect with me on Facebook, LinkedIn, via email at dangduongminhnhat2003@gmail.com, GitHub, or by phone at +84 829 258 815.