Posted on: September 10, 2025 | Reading time: 8 minutes | Word count: 1701 words | Author: Dang Duong Minh Nhat
Hello everyone! Today, I will introduce Group Theory and Vector Spaces. You can also check out other blogs in this series. Alright, let’s get started!
Group Theory
Definition 1 (Group)
A group is a set $\mathbb{G}$ together with a binary operation $\cdot$, satisfying the following four properties:
- Closure: If $a, b \in \mathbb{G}$, then $a \cdot b \in \mathbb{G}$.
- Associativity: For all $a, b, c \in \mathbb{G}$, we have $(a \cdot b) \cdot c = a \cdot (b \cdot c).$
- Identity Element: There exists an element $e \in \mathbb{G}$ (called the identity element) such that for every $a \in \mathbb{G}$, we have $a \cdot e = e \cdot a = a.$
- Inverse Element: For each $a \in \mathbb{G}$, there exists a unique element $b \in \mathbb{G}$ (called the inverse of $a$) such that $a \cdot b = b \cdot a = e.$ This element $b$ is denoted by $a^{-1}$.
Example
For $n \geq 2$, the set $\mathbb{Z}_n = \left\lbrace 0, 1, \dots, n - 1\right\rbrace$ together with addition modulo $n$ forms a group, denoted $\left( \mathbb{Z}_n, + \right)$. The identity element of the group is $0$. For each $a \neq 0$, the inverse of $a$ is $n - a$.
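To make the definition concrete, here is a brute-force Python check of all four axioms for $(\mathbb{Z}_7, +)$; the modulus $7$ is just an illustrative choice:

```python
# Brute-force check of the four group axioms for (Z_7, +).
n = 7
G = set(range(n))

def op(a, b):
    return (a + b) % n

# Closure
assert all(op(a, b) in G for a in G for b in G)
# Associativity
assert all(op(op(a, b), c) == op(a, op(b, c)) for a in G for b in G for c in G)
# Identity element: 0
assert all(op(a, 0) == op(0, a) == a for a in G)
# Inverse element: the inverse of a is n - a (and 0 is its own inverse)
assert all(op(a, (n - a) % n) == 0 for a in G)

print("(Z_7, +) satisfies all four group axioms")
```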
Definition 2 (Order of a Group)
The order of a group is the number of elements it contains, i.e., the cardinality of the set $\mathbb{G}$.
Example
The group $(\mathbb{Z}_{23}, +)$ consists of the integers $\left\lbrace 0, 1, 2, \dots, 22\right\rbrace$. Since it has $23$ elements, the order of the group is $23$.
Definition 3 (Subgroup)
Let $\mathbb{G}$ be a group. A subset $\mathbb{H}$ of $\mathbb{G}$ is called a subgroup of $\mathbb{G}$ if $\mathbb{H}$ itself is a group under the operation inherited from $\mathbb{G}$.
For a subset $\mathbb{H}$ to be a subgroup of $\mathbb{G}$, it must satisfy the following conditions:
- Closure: If $a, b \in \mathbb{H}$, then $a \cdot b \in \mathbb{H}$.
- Identity Element: The identity element $e$ of $\mathbb{G}$ belongs to $\mathbb{H}$.
- Inverse Element: If $a \in \mathbb{H}$, then $a^{-1} \in \mathbb{H}$.
We do not need to check associativity in $\mathbb{H}$, as this property is inherited directly from $\mathbb{G}$.
If $\mathbb{H}$ is a subgroup of $\mathbb{G}$, we write $\mathbb{H} \leq \mathbb{G}$. Furthermore, if $\mathbb{H} \neq \mathbb{G}$, we call $\mathbb{H}$ a proper subgroup of $\mathbb{G}$.
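As a concrete illustration, the following sketch tests these conditions for the multiples of $3$ inside $(\mathbb{Z}_{12}, +)$; the helper `is_subgroup` is my own illustrative code, not a library function:

```python
# Minimal subgroup test inside (Z_12, +): the multiples of 3 form a subgroup.
n = 12
G = set(range(n))
H = {0, 3, 6, 9}                      # multiples of 3 in Z_12

def op(a, b):
    return (a + b) % n

def is_subgroup(H, G, op, e=0):
    """Check the three subgroup conditions; associativity is inherited from G."""
    if not H <= G:
        return False
    closed   = all(op(a, b) in H for a in H for b in H)
    identity = e in H
    inverses = all(any(op(a, b) == e for b in H) for a in H)
    return closed and identity and inverses

print(is_subgroup(H, G, op))          # True
print(is_subgroup({0, 1, 3}, G, op))  # False: closure fails (1 + 1 = 2 is not in it)
```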
Trivial Subgroups
Let $\mathbb{G}$ be a group. Then $\left\lbrace e\right\rbrace$ and $\mathbb{G}$ are its two trivial subgroups.
Example
$( \mathbb{Z}, + ) \leq ( \mathbb{Q}, + ) \leq ( \mathbb{R}, + ) \leq ( \mathbb{C}, + )$. Each group on the left is a subgroup of every group to its right.
Vector Spaces
A vector space $V$ is a subset of $\mathbb{R}^m$ with the property that:
$$ \alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2} \in V \quad \text{for all } \mathbf{v_1}, \mathbf{v_2} \in V \text{ and all } \alpha_1, \alpha_2 \in \mathbb{R}. $$
In other words, a vector space is a subset of $\mathbb{R}^m$ that is closed under addition and scalar multiplication by elements of $\mathbb{R}$.
Linear Combinations
Let $\mathbf{v_1}, \mathbf{v_2}, \ldots, \mathbf{v_k} \in V$. A linear combination of $\mathbf{v_1}, \mathbf{v_2}, \ldots, \mathbf{v_k}$ is a vector of the form:
$$ \mathbf{w} = \alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2} + \cdots + \alpha_k \mathbf{v_k} \quad \text{with } \alpha_1, \ldots, \alpha_k \in \mathbb{R}. $$
The set of all such linear combinations:
$$ \left\lbrace \alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2} + \cdots + \alpha_k \mathbf{v_k} : \alpha_1, \ldots, \alpha_k \in \mathbb{R}\right\rbrace, $$
is called the span of $\left\lbrace \mathbf{v_1}, \ldots, \mathbf{v_k}\right\rbrace$.
Linear Independence
A set of vectors $\mathbf{v_1}, \mathbf{v_2}, \ldots, \mathbf{v_k} \in V$ is called (linearly) independent if the only way to have
$$ \alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2} + \cdots + \alpha_k \mathbf{v_k} = \mathbf{0} \tag{1} $$
is when $\alpha_1 = \alpha_2 = \cdots = \alpha_k = 0$. Conversely, if $(1)$ holds with at least one $\alpha_i \ne 0$, then the set is (linearly) dependent.
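One convenient computational test (not the only one) is to stack the vectors as the rows of a matrix and compare its rank with the number of vectors. A minimal sketch with made-up vectors:

```python
# Linear independence test: k vectors are independent iff the matrix whose
# rows are those vectors has rank k.
import numpy as np

def is_independent(vectors):
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

v1, v2, v3 = [1, 0, 1], [0, 1, 1], [1, 1, 2]   # v3 = v1 + v2, so dependent
print(is_independent([v1, v2]))      # True
print(is_independent([v1, v2, v3]))  # False: 1*v1 + 1*v2 - 1*v3 = 0
```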
Basis
A basis for $V$ is a set of linearly independent vectors $\mathbf{v_1}, \ldots, \mathbf{v_n}$ whose span is $V$. This is equivalent to saying that every vector $\mathbf{w} \in V$ can be written as
$$ \mathbf{w} = \alpha_1 \mathbf{v_1} + \alpha_2 \mathbf{v_2} + \cdots + \alpha_n \mathbf{v_n}, $$
for a unique choice of $\alpha_1, \ldots, \alpha_n \in \mathbb{R}$.
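In computational terms, finding that unique choice of coefficients amounts to solving a linear system. A minimal sketch with a made-up basis of $\mathbb{R}^2$:

```python
# Recovering the coordinates of w in a given basis: stack the basis vectors
# as the columns of a matrix B and solve B * alpha = w.
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are the basis vectors v1, v2
w = np.array([3.0, 2.0])

alpha = np.linalg.solve(B, w)     # unique because the columns are independent
print(alpha)                      # [1. 2.]  ->  w = 1*v1 + 2*v2
```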
Proposition 1
Let $V \subset \mathbb{R}^m$ be a vector space:
- There exists at least one basis for $V$.
- Every basis of $V$ has the same number of elements. The number of elements in a basis is called the dimension of $V$.
- Let $\mathbf{v_1}, \ldots, \mathbf{v_n}$ be a basis for $V$ and $\mathbf{w_1}, \ldots, \mathbf{w_n}$ be another set of $n$ vectors in $V$. Write each $\mathbf{w_j}$ as a linear combination of the $\mathbf{v_i}$’s:
$$ \mathbf{w_1} = \alpha_{11} \mathbf{v_1} + \alpha_{12} \mathbf{v_2} + \cdots + \alpha_{1n} \mathbf{v_n}, $$
$$ \mathbf{w_2} = \alpha_{21} \mathbf{v_1} + \alpha_{22} \mathbf{v_2} + \cdots + \alpha_{2n} \mathbf{v_n}, $$
$$ \vdots $$
$$ \mathbf{w_n} = \alpha_{n1} \mathbf{v_1} + \alpha_{n2} \mathbf{v_2} + \cdots + \alpha_{nn} \mathbf{v_n}. $$
Then, $\mathbf{w_1}, \ldots, \mathbf{w_n}$ is also a basis for $V$ if and only if the determinant of the matrix
$$ \begin{pmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{pmatrix} \ne 0. $$
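Here is a quick numerical illustration of this criterion, taking the standard basis of $\mathbb{R}^2$ as the $\mathbf{v_i}$; the coefficient matrices are made-up examples:

```python
# Determinant criterion from Proposition 1: the w_j form a basis iff the
# matrix of coefficients alpha_{ji} has nonzero determinant.
import numpy as np

# Row j holds the coefficients alpha_{j1}, ..., alpha_{jn} expressing w_j
# in terms of the basis v_1, ..., v_n.
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])   # w1 = v1 + v2,  w2 = v1 + 2*v2
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # w2 = 2*w1, so these w_j cannot form a basis

print(np.linalg.det(A))  # 1.0  (nonzero: w1, w2 form a basis)
print(np.linalg.det(B))  # 0.0  (zero: w1, w2 are dependent)
```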
Dot Product and Euclidean Norm
Let $\mathbf{v}, \mathbf{w} \in V \subset \mathbb{R}^m$ with coordinates:
$$ \mathbf{v} = (x_1, x_2, \ldots, x_m), \quad \mathbf{w} = (y_1, y_2, \ldots, y_m). $$
- The dot product (or inner product) of $\mathbf{v}$ and $\mathbf{w}$:
$$ \mathbf{v} \cdot \mathbf{w} = x_1 y_1 + x_2 y_2 + \cdots + x_m y_m. $$
We say that $\mathbf{v}$ and $\mathbf{w}$ are orthogonal if $\mathbf{v} \cdot \mathbf{w} = 0$.
- The length (or Euclidean norm) of $\mathbf{v}$:
$$ \lVert \mathbf{v} \rVert = \sqrt{x_1^2 + x_2^2 + \cdots + x_m^2}. $$
We have the relation:
$$ \mathbf{v} \cdot \mathbf{v} = \lVert \mathbf{v} \rVert^2. $$
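Both quantities, and the relation between them, are one-liners in numpy; the vectors below are made up so that $\mathbf{v} \cdot \mathbf{w} = 0$:

```python
# Dot product, Euclidean norm, and the identity v . v = ||v||^2.
import numpy as np

v = np.array([3.0, 4.0, 0.0])
w = np.array([4.0, -3.0, 1.0])

print(np.dot(v, w))                          # 0.0 -> v and w are orthogonal
print(np.linalg.norm(v))                     # 5.0
print(np.dot(v, v), np.linalg.norm(v) ** 2)  # both 25.0
```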
Proposition 2
Let $\mathbf{v}, \mathbf{w} \in V \subset \mathbb{R}^m$:
(a) Let $\theta$ be the angle between the two vectors $\mathbf{v}$ and $\mathbf{w}$ (when placed tail to tail). Then:
$$ \mathbf{v} \cdot \mathbf{w} = \lVert \mathbf{v} \rVert \lVert \mathbf{w} \rVert \cos \theta. \tag{2} $$
(b) (Cauchy–Schwarz Inequality)
$$ \lvert \mathbf{v} \cdot \mathbf{w} \rvert \le \lVert \mathbf{v} \rVert \lVert \mathbf{w} \rVert. \tag{3} $$
Proof: Consider the function
$$ f(t) = \lVert \mathbf{v} - t \mathbf{w} \rVert^2 = (\mathbf{v} - t \mathbf{w}) \cdot (\mathbf{v} - t \mathbf{w}) = \lVert \mathbf{v} \rVert^2 - 2t (\mathbf{v} \cdot \mathbf{w}) + t^2 \lVert \mathbf{w} \rVert^2. $$
Since $f(t) \ge 0$ for all $t \in \mathbb{R}$, choosing $t = \dfrac{\mathbf{v} \cdot \mathbf{w}}{\lVert \mathbf{w} \rVert^2}$ (we may assume $\mathbf{w} \neq \mathbf{0}$, since the inequality is trivial otherwise) gives:
$$ 0 \le \lVert \mathbf{v} \rVert^2 - \frac{(\mathbf{v} \cdot \mathbf{w})^2}{\lVert \mathbf{w} \rVert^2}, $$
which implies the Cauchy–Schwarz inequality.
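As a sanity check, here is a small numerical experiment confirming inequality $(3)$ on random vectors; the dimension and sample count are arbitrary choices:

```python
# Numerical check of Cauchy-Schwarz: |v . w| <= ||v|| * ||w||.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    v, w = rng.normal(size=5), rng.normal(size=5)
    assert abs(np.dot(v, w)) <= np.linalg.norm(v) * np.linalg.norm(w) + 1e-12

print("Cauchy-Schwarz held on 1000 random pairs")
```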
Orthogonal and Orthonormal Bases
An orthogonal basis for $V$ is a set $\mathbf{v_1}, \ldots, \mathbf{v_n}$ such that:
$$ \mathbf{v_i} \cdot \mathbf{v_j} = 0 \quad \forall i \ne j. $$
The basis is orthonormal if, in addition:
$$ \lVert \mathbf{v_i} \rVert = 1 \quad \forall i. $$
For an orthogonal basis $\mathbf{v_1}, \ldots, \mathbf{v_n}$, if
$$ \mathbf{v} = a_1 \mathbf{v_1} + \cdots + a_n \mathbf{v_n}, $$
then:
$$ \begin{aligned} \lVert \mathbf{v} \rVert^2 &= \lVert a_1 \mathbf{v_1} + \cdots + a_n \mathbf{v_n} \rVert^2 \\ &= \bigl(a_1 \mathbf{v_1} + \cdots + a_n \mathbf{v_n} \bigr)\cdot \bigl(a_1 \mathbf{v_1} + \cdots + a_n \mathbf{v_n} \bigr) \\ &= \sum_{i=1}^n \sum_{j=1}^n a_i a_j (\mathbf{v_i} \cdot \mathbf{v_j}) \\ &= \sum_{i=1}^n a_i^2 \lVert \mathbf{v_i} \rVert^2 \quad \text{since } \mathbf{v_i} \cdot \mathbf{v_j} = 0 \text{ for } i \ne j. \end{aligned} $$
If the basis is orthonormal, the formula simplifies to:
$$ \lVert \mathbf{v} \rVert^2 = \sum_{i=1}^n a_i^2. $$
Theorem 1 (Gram–Schmidt Algorithm)
Let $\mathbf{v_1}, \ldots, \mathbf{v_n}$ be a basis for $V \subset \mathbb{R}^m$. The following algorithm produces an orthogonal basis $\mathbf{v_1^*}, \ldots, \mathbf{v_n^*}$ for $V$:
Set $\mathbf{v_1^*} = \mathbf{v_1}$.
For $i = 2, 3, \ldots, n$:
- Compute:
$$ \mu_{ij} = \frac{\mathbf{v_i} \cdot \mathbf{v_j^*}}{\lVert \mathbf{v_j^*}\rVert^2}, \quad 1 \le j < i. $$
- Set:
$$ \mathbf{v_i^*} = \mathbf{v_i} - \sum_{j=1}^{i-1} \mu_{ij} \mathbf{v_j^*}. $$
Then:
$$ \text{Span}\left\lbrace \mathbf{v_1}, \ldots, \mathbf{v_i}\right\rbrace = \text{Span}\left\lbrace \mathbf{v_1^*}, \ldots, \mathbf{v_i^*}\right\rbrace, \quad \forall i = 1, \ldots, n. $$
Proof. The proof of orthogonality is by induction. We assume that the vectors $\mathbf{v^{*}_1}, \ldots, \mathbf{v^{*}_{i-1}}$ are mutually orthogonal, and we need to show that $\mathbf{v^{*}_i}$ is orthogonal to all preceding starred vectors. To do this, we choose any $k < i$ and compute:
$$ \begin{aligned} \mathbf{v^{*}_i} \cdot \mathbf{v^{*}_k} &= \left( \mathbf{v_i} - \sum_{j=1}^{i-1} \mu_{ij} \mathbf{v^{*}_j} \right) \cdot \mathbf{v^{*}_k} \\ &= \mathbf{v_i} \cdot \mathbf{v^{*}_k} - \mu_{ik} \lVert \mathbf{v^{*}_k}\rVert^2 \quad \text{(since $\mathbf{v^{*}_k} \cdot \mathbf{v^{*}_j} = 0$ for $j \ne k$)} \\ &= 0 \quad \text{by the definition of } \mu_{ik}. \end{aligned} $$
To prove the final statement about the spans, we first note from the definition of $\mathbf{v^{*}_i}$ that $\mathbf{v_i}$ is in the span of $\mathbf{v^{*}_1}, \ldots, \mathbf{v^{*}_i}$.
We prove the reverse inclusion by induction. Assume that $\mathbf{v^{*}_1}, \ldots, \mathbf{v^{*}_{i-1}}$ are in the span of $\mathbf{v_1}, \ldots, \mathbf{v_{i-1}}$, and we need to show that $\mathbf{v^{*}_i}$ is in the span of $\mathbf{v_1}, \ldots, \mathbf{v_i}$. But from the definition of $\mathbf{v^{*}_i}$, we see that it is in the span of $\mathbf{v^{*}_1}, \ldots, \mathbf{v^{*}_{i-1}}, \mathbf{v_i}$, so by the induction hypothesis, we are done.
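Theorem 1 translates almost line by line into code. Below is a minimal numpy sketch of the algorithm, with no attention paid to numerical stability and an illustrative basis of $\mathbb{R}^3$; the printed matrix of pairwise dot products confirms that the output vectors are mutually orthogonal:

```python
# Gram-Schmidt as stated in Theorem 1. Rows of V are the input basis
# v_1, ..., v_n; rows of the result are the orthogonal v_1*, ..., v_n*.
import numpy as np

def gram_schmidt(V):
    V = np.array(V, dtype=float)
    W = np.zeros_like(V)
    W[0] = V[0]                                   # v_1* = v_1
    for i in range(1, len(V)):
        # mu_ij = (v_i . v_j*) / ||v_j*||^2 for j = 1, ..., i-1
        mu = np.array([V[i] @ W[j] / (W[j] @ W[j]) for j in range(i)])
        W[i] = V[i] - mu @ W[:i]                  # v_i* = v_i - sum_j mu_ij v_j*
    return W

V = [[1.0, 1.0, 0.0],
     [1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
W = gram_schmidt(V)
# Entry (i, j) of W W^T is v_i* . v_j*; a diagonal matrix means orthogonality.
print(np.round(W @ W.T, 10))
```

Dividing each $\mathbf{v_i^*}$ by its norm afterwards would turn this orthogonal basis into an orthonormal one.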
In this post, we have journeyed through two pillars of modern algebra that profoundly influence cryptography: Group Theory and Vector Spaces. Group theory provides the precise language needed to describe the structure of systems like Elliptic Curve Cryptography (ECC).
However, the knowledge gained about vector spaces, bases, and especially orthogonality serves as the crucial stepping stone for what comes next. These concepts are the key to unlocking one of the most exciting and important fields in cryptography today: Lattice-based Cryptography. The Gram–Schmidt algorithm is not just a theoretical exercise; it is a fundamental tool used in the analysis and construction of quantum-resistant cryptosystems.
In the next article of this series, we will use this exact foundation to define what a “lattice” is and to explore why it holds so much promise for the future of cryptography.
References
- Satya Mandal. (2025, January 22). Part I: Groups and Subgroups. Retrieved from https://mandal.ku.edu/math791/spFteen791/PIGroups.pdf
- Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman. An Introduction to Mathematical Cryptography. Retrieved from https://link.springer.com/book/10.1007/978-0-387-77993-5
Connect with Me
Connect with me on Facebook, LinkedIn, via email at dangduongminhnhat2003@gmail.com, GitHub, or by phone at +84 829 258 815.