Diffstat (limited to 'mathematics')
-rw-r--r-- mathematics/linear-algebra.md 103
1 file changed, 55 insertions, 48 deletions
diff --git a/mathematics/linear-algebra.md b/mathematics/linear-algebra.md
index a65f741..4e1162b 100644
--- a/mathematics/linear-algebra.md
+++ b/mathematics/linear-algebra.md
@@ -75,16 +75,16 @@ Our definition of a vector space leads to some facts:
- Proof: by definition, the zero vector exists.
- The additive inverse for some $x$ is *unique*.
- Proof: exercise
-- If $V$ is a vector space over $𝔽$ and $V ≠ \{0\}$, then $V$ is an *infinite set*. (note this only holds over $𝔽$)
+- If $V$ is a vector space over $𝔽$ and $V ≠ \\{0\\}$, then $V$ is an *infinite set*. (note this relies on the field $𝔽$ being infinite)
- Proof: take any nonzero $x ∈ V$; the scalar multiples $cx$ for $c ∈ 𝔽$ are pairwise distinct, since $cx = dx$ implies $(c - d)x = 0$ and hence $c = d$.
-Example: Let $S = \{(a_1, a_2) | a_1, a_2 ∈ ℝ\}$.
+Example: Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$.
For $(a_1, a_2), (b_1, b_2) ∈ S$ and $c ∈ ℝ$, we define:
- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, a_2 - b_2)$
- $c(a_1, a_2) = (ca_1, ca_2)$.
- This fails commutativity!
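A concrete instance of the failure:

$$(1, 2) + (3, 4) = (1 + 3, 2 - 4) = (4, -2) ≠ (4, 2) = (3, 4) + (1, 2)$$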
-Example: Let $S = \{(a_1, a_2) | a_1, a_2 ∈ ℝ\}$. We define:
+Example: Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$. We define:
- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, 0)$
- $c(a_1, a_2) = (ca_1, 0)$
- This fails the existence of a zero vector: every sum has second coordinate $0$, so no element of $S$ can act as an additive identity for $(a_1, a_2)$ with $a_2 ≠ 0$.
@@ -104,7 +104,7 @@ We call $a_1 ... a_n$ the *coefficients* of the linear combination.
https://math.stackexchange.com/questions/3492590/linear-combination-span-independence-and-bases-for-infinite-dimensional-vector
-Let $S$ be a nonempty subset of a vector space $V$. The **span** of $S$, denoted $span(S)$, is the set consisting of all linear combination of vectors in S. For convenience, we define $span(∅) = \{0\}$.
+Let $S$ be a nonempty subset of a vector space $V$. The **span** of $S$, denoted $span(S)$, is the set consisting of all linear combinations of vectors in $S$. For convenience, we define $span(∅) = \\{0\\}$.
The span of any subset $S$ of a vector space $V$ is a subspace of $V$.
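As a quick numerical sanity check (not from the source text): for finite $S ⊂ ℝ^n$, $v ∈ span(S)$ exactly when appending $v$ to the vectors of $S$ leaves the rank unchanged. A minimal numpy sketch with hypothetical vectors:

```python
import numpy as np

# Test v ∈ span(S) for hypothetical vectors in R^3: appending v to the
# vectors of S must not increase the rank.
s1, s2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
S = np.column_stack([s1, s2])

def in_span(S, v):
    return np.linalg.matrix_rank(np.column_stack([S, v])) == np.linalg.matrix_rank(S)

print(in_span(S, np.array([2.0, 3.0, 5.0])))   # True:  2*s1 + 3*s2
print(in_span(S, np.array([1.0, 1.0, 0.0])))   # False: no combination works
```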
@@ -127,7 +127,7 @@ Let $S$ be a linearly independent subset of a vector space $V$, and let $v ∈ V
A basis $B$ for a vector space $V$ is a *linearly independent* subset of $V$ that *spans* $V$. If $B$ is a basis for $V$, we also say that the vectors of $B$ form a basis for $V$.
-Let V be a vector space and β = {v_1, ..., v_n} be a subset of V. Then β is a basis for V iff every v ∈ V can be **uniquely expressed** as a linear combination of vectors of β. that is, V can be written in the form v = a_1 u_1 + a_2 u_2 ... a_n u_n for unique scalars a.
+Let $V$ be a vector space and $β = \\{v_1, ..., v_n\\}$ be a subset of $V$. Then $β$ is a basis for $V$ iff every $v ∈ V$ can be **uniquely expressed** as a linear combination of vectors of $β$; that is, written in the form $v = a_1 v_1 + a_2 v_2 + ... + a_n v_n$ for unique scalars $a_1, ..., a_n$.
If a vector space $V$ is spanned by a finite set $S$, then some subset of $S$ is a basis for $V$. Hence, $V$ has a finite basis.
Proof: If $S = ∅$, then $V = span(S) = span(∅) = \\{0\\}$, in which case $∅$ is a basis for $V$.
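A minimal numpy sketch of the nonempty case, with a hypothetical spanning set: walk through $S$ and keep each vector that is not already in the span of those kept so far; the survivors are independent and still span.

```python
import numpy as np

# A spanning set contains a basis: greedily keep each vector that raises the
# rank (hypothetical spanning set S for a subspace of R^3).
S = [np.array([1.0, 0.0, 1.0]),
     np.array([2.0, 0.0, 2.0]),   # = 2 * the first vector, so redundant
     np.array([0.0, 1.0, 0.0])]

basis = []
for v in S:
    if np.linalg.matrix_rank(np.column_stack(basis + [v])) > len(basis):
        basis.append(v)

print(len(basis))  # 2: the first and third vectors form a basis of span(S)
```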
@@ -144,9 +144,9 @@ Theorem 1.4: Let $W$ be a subspace of a finite-dimensional vector space $V$. The
## Linear Transformations
-Let $V$ and $W$ be vector spaces (over $F$).
+Let $V$ and $W$ be vector spaces (over a field $F$).
-Definition: A function $T: V → W$ is a **linear transformation** from $V$ into $W$ if $∀x,y ∈ V, c ∈ F$ we have $T(cx + y) = cT(x) + T(y)$.
+A function $T: V → W$ is a **linear transformation** from $V$ into $W$ if $∀x,y ∈ V, c ∈ F$ we have $T(cx + y) = cT(x) + T(y)$.
Consequently:
- $T(x + y) = T(x) + T(y)$
- $T(cx) = cT(x)$
- $T(0) = 0$
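For example, $T : ℝ^2 → ℝ^2$ defined by $T(a_1, a_2) = (2a_1 + a_2, a_1)$ is linear (a standard example, not specific to these notes):

$$T(c(x_1, x_2) + (y_1, y_2)) = (2(cx_1 + y_1) + (cx_2 + y_2), cx_1 + y_1) = cT(x_1, x_2) + T(y_1, y_2)$$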
@@ -155,9 +155,8 @@ Subsequently:
Let $T: V → W$ be a linear transformation.
-Definition:
-The **kernel** (or null space) $N(T)$ of $T$ is the set of all vectors in $V$ such that $T(x) = 0$: $N(T) = \{ x ∈ V : T(x) = 0 \}$.
-The **image** (or range) $R(T)$ of $T$ is the subset of $W$ consisting of all images (under $T$) of elements of $V$: $R(T) = \{ T(x) : x ∈ V \}$
+The **kernel** (or null space) $N(T)$ of $T$ is the set of all vectors in $V$ such that $T(x) = 0$: $N(T) = \\{ x ∈ V : T(x) = 0 \\}$.
+The **image** (or range) $R(T)$ of $T$ is the subset of $W$ consisting of all images (under $T$) of elements of $V$: $R(T) = \\{ T(x) : x ∈ V \\}$.
Theorem: The kernel $N(T)$ and image $R(T)$ are subspaces of $V$ and $W$, respectively.
<details>
@@ -170,13 +169,13 @@ Let $x,y ∈ R(T)$ and $c ∈ F$. As $T(0_v) = 0_w$, $0_w ∈ R(T)$.
...
</details>
-Theorem: If $β = \{ v_1, v_2, ... v_n \}$ is a basis for $V$, then $R(T) = span(\{ T(v_1), T(v_2), ..., T(v_n) \})$.
+Theorem: If $β = \\{ v_1, v_2, ... v_n \\}$ is a basis for $V$, then $R(T) = span(\\{ T(v_1), T(v_2), ..., T(v_n) \\})$.
<details>
<summary>Proof</summary>
...
</details>
-Definition: If $N(T)$ and $R(T)$ are finite-dimensional, then the **nullity** and **rank** of T are the dimensions of $N(T)$ and $R(T)$, respectively.
+If $N(T)$ and $R(T)$ are finite-dimensional, then the **nullity** and **rank** of $T$ are the dimensions of $N(T)$ and $R(T)$, respectively.
Rank-Nullity Theorem: If $V$ is *finite-dimensional*, then $dim(V) = nullity(T) + rank(T)$.
<details>
@@ -184,76 +183,84 @@ Rank-Nullity Theorem: If $V$ is *finite-dimensional*, then $dim(V) = nullity(T)
...
</details>
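A numeric illustration, with a hypothetical matrix standing in for $T : ℝ^4 → ℝ^3$ (matrices are formally introduced below, but numpy's `matrix_rank` computes $rank(T)$ directly):

```python
import numpy as np

# Rank-nullity check for a hypothetical matrix of a linear map T: R^4 -> R^3.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # row 3 = row 1 + row 2, so rank is 2

n = A.shape[1]                       # dim(V) = number of columns
rank = np.linalg.matrix_rank(A)      # dim(R(T))
nullity = n - rank                   # dim(N(T)), by rank-nullity
print(rank, nullity, rank + nullity == n)   # 2 2 True
```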
-Definition:
Recall that a *function* definitionally maps *each* element of its domain to *exactly* one element of its codomain.
-A function is **injective** (or one-to-one) iff each element of its domain maps to a *distinct* element of its codomain.
-A function **surjective** (or onto) iff each element of the codomain is mapped to by *at least* one element in the domain.
+A function is **injective** (or **one-to-one**) iff distinct elements of its domain map to distinct elements of its codomain.
+A function is **surjective** (or **onto**) iff each element of the codomain is mapped to by *at least* one element in the domain.
A function is **bijective** iff it is surjective and injective. Necessarily, a bijective function is invertible, which will be formally stated & proven later.
-Theorem: $T$ is injective iff $N(T) = \{0\}$.
+Theorem: $T$ is injective iff $N(T) = \\{0\\}$.
<details>
<summary>Proof</summary>
...
</details>
-Theorem: For $V$ and $W$ of equal (finite) dimension: $T$ is injective iff it is surjective.
+Theorem: For $V$ and $W$ of equal (and finite) dimension: $T$ is injective iff it is surjective.
<details>
<summary>Proof</summary>
...
</details>
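Numerically, with a hypothetical square matrix standing in for $T$, injectivity and surjectivity both reduce to the matrix having full rank:

```python
import numpy as np

# For T: V -> W with dim(V) = dim(W) = n, represented by a hypothetical
# square matrix: injective <=> nullity 0 <=> rank n <=> surjective.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])        # second row = 2 * first row, so rank 1

n = A.shape[0]
print(np.linalg.matrix_rank(A) == n)   # False: neither injective nor surjective
```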
-Theorem: Suppose that $V$ is finite-dimensional with a basis $\{ v_1, v_2, ..., v_n \}$. For any vectors $w_1, w_2, ... w_n$ in $W$, there exists *exactly* one linear transformation such that $T(v_i) = w_i$ for $i = 1, 2, ..., n$.
+Theorem: Suppose that $V$ is finite-dimensional with a basis $\\{ v_1, v_2, ..., v_n \\}$. For any vectors $w_1, w_2, ... w_n$ in $W$, there exists *exactly* one linear transformation $T : V → W$ such that $T(v_i) = w_i$ for $i = 1, 2, ..., n$.
<details>
<summary>Proof</summary>
...
</details>
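A sketch of the theorem in coordinates, assuming $β$ is the standard ordered basis of $ℝ^3$ and hypothetical targets $w_i$: the unique $T$ is the matrix whose $i$-th column is $w_i$.

```python
import numpy as np

# With β the standard ordered basis {e_1, e_2, e_3} of R^3, the unique linear
# T with T(e_i) = w_i is the matrix whose i-th column is w_i (hypothetical w_i).
w1, w2, w3 = np.array([1.0, 0.0]), np.array([1.0, 1.0]), np.array([0.0, 2.0])
T = np.column_stack([w1, w2, w3])        # T: R^3 -> R^2

x = np.array([2.0, -1.0, 3.0])           # x = 2e_1 - e_2 + 3e_3
# Linearity forces T(x) = 2w_1 - w_2 + 3w_3; no other value is possible:
print(T @ x, 2*w1 - w2 + 3*w3)           # [1. 5.] [1. 5.]
```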
-## Linear Transformations as Matrices
-
-- Let $V, W$ be finite-dimensional vector spaces.
-- Let $T, U : V → W$ be linear transformations from $V$ to $W$.
-- Let $β$ and $γ$ be ordered bases of $V$ and $W$, respectively.
-- Let $a ∈ F$ be a scalar.
-
-Definition: An **ordered basis** of a finite-dimensional vector space $V$ is, well, an ordered basis of $V$. We represent this with exactly the same notation as a standard unordered basis, but will call attention to it whenever necessary.
-- For the vector space $F^n$ we call $\{ e_1, e_2, ..., e_n \}$ the **standard ordered basis** for $F^n$.
-- For the vector space $P_n(F)$ we call $\{ 1, x, ..., x^n \}$ the **standard ordered basis** for $P_n(F)$.
-
-Definition: Let $a_1, a_2, ... a_n$ be the unique scalars such that $x = Σ_{i=1}^n a_i u_i$ for all $x ∈ V$. The **coordinate vector** of $x$ relative to $β$ is $(a_1, ..., a_n)$ (vert) and denoted $[x]_β$.
-
-Definition: The $m × n$ matrix $A$ defined by $A_{ij} = a_{ij}$ is called the **matrix representation of $T$ in the ordered bases $β$ and $γ$**, and denoted as $A = [T]_β^γ$. If $V = W$ and $β = γ$, we write $A = [T]_β$.
+## Composition of Linear Transformations
-Definition: Let $T, U : V → W$ be arbitrary functions. Let $a ∈ F$. We define $T + U : V → W$ as $(T + U)(x) = T(x) + U(x)$ for all $x ∈ V$, and $aT : V → W$ as $(aT)(x) = aT(x)$ for all $x ∈ V$.
+Let $V$, $W$, and $Z$ be vector spaces.
-Theorem: The set of all linear transformations (via our definitions of addition and scalar multiplication above) $V → W$ forms a vector space over $F$.
+Theorem: The set of all linear transformations $V → W$ (with addition and scalar multiplication as defined below) forms a vector space over $F$. We denote this as $\mathcal{L}(V, W)$. If $V = W$, we write $\mathcal{L}(V)$.
<details>
<summary>Proof</summary>
...
</details>
-Definition: The vector space of all linear transformations $V → W$ is denoted by $\mathcal{L}(V, W)$. If $V = W$, we write $\mathcal{V}$.
+Let $T, U : V → W$ be arbitrary functions. We define **addition** $T + U : V → W$ as $∀x ∈ V : (T + U)(x) = T(x) + U(x)$, and **scalar multiplication** $aT : V → W$ as $∀x ∈ V : (aT)(x) = aT(x)$ for all $a ∈ F$.
-Theorem: $[T + U]_β^γ = [T]_β^γ + [U]_β^γ$ and $[aT]_β^γ = a[T]_β^γ$.
+Theorem: Let $T : V → W$ and $U : W → Z$ be linear. Then their composition $UT : V → Z$ is linear.
+<details>
+<summary>Proof</summary>
+Let $x,y ∈ V$ and $c ∈ F$. Then:
+
+$$UT(cx + y) = U(T(cx + y)) = U(cT(x) + T(y)) = cU(T(x)) + U(T(y)) = c(UT)(x) + (UT)(y)$$
+</details>
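A quick numerical spot-check with hypothetical maps $T : ℝ^3 → ℝ^2$ and $U : ℝ^2 → ℝ^2$ written as plain Python functions:

```python
import numpy as np

# Hypothetical linear maps T: R^3 -> R^2 and U: R^2 -> R^2 as plain functions.
def T(v):
    x, y, z = v
    return np.array([x + 2*z, y + z])

def U(w):
    a, b = w
    return np.array([b, a + b])

def UT(v):
    return U(T(v))                # the composition UT: R^3 -> R^2

x, y, c = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, -1.0]), 2.5
print(np.allclose(UT(c*x + y), c*UT(x) + UT(y)))   # True, as the theorem predicts
```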
+
+Theorem: Let $T, U_1, U_2 ∈ \mathcal{L}(V)$. Then:
+- $T(U_1 + U_2) = TU_1 + TU_2$ and $(U_1 + U_2)T = U_1 T + U_2 T$
+- $T(U_1 U_2) = (TU_1) U_2$
+- $TI = IT = T$
+- $∀a ∈ F : a(U_1 U_2) = (aU_1) U_2 = U_1 (aU_2)$
<details>
<summary>Proof</summary>
...
+<!-- A more general result holds for linear transformations with domains unequal to their codomains, exercise 7 -->
</details>
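A spot-check of these identities with random matrices standing in for $T, U_1, U_2$ (hypothetical, and anticipating the matrix representations of the next section):

```python
import numpy as np

# Random 3 x 3 matrices standing in for T, U_1, U_2 ∈ L(V).
rng = np.random.default_rng(0)
T, U1, U2 = (rng.standard_normal((3, 3)) for _ in range(3))
I = np.eye(3)

print(np.allclose(T @ (U1 + U2), T @ U1 + T @ U2))      # left distributivity
print(np.allclose((U1 + U2) @ T, U1 @ T + U2 @ T))      # right distributivity
print(np.allclose(T @ (U1 @ U2), (T @ U1) @ U2))        # associativity
print(np.allclose(T @ I, T) and np.allclose(I @ T, T))  # TI = IT = T
```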
-## Composition of Linear Transformations
+## Linear Transformations as Matrices
-Let $V$, $W$, and $Z$ be vector spaces.
+- Let $V, W$ be finite-dimensional vector spaces.
+- Let $T, U : V → W$ be linear transformations from $V$ to $W$.
+- Let $β$ and $γ$ be ordered bases of $V$ and $W$, respectively.
+- Let $a ∈ F$ be a scalar.
-Theorem: Let $T : V → W$ and $U : W → Z$ be linear. Then their composition $UT : V → Z$ is linear.
+An **ordered basis** of a finite-dimensional vector space $V$ is a basis for $V$ endowed with a specific order. We represent this with exactly the same notation as a standard unordered basis, but will call attention to it whenever necessary.
+- For the vector space $F^n$ we call $\\{ e_1, e_2, ..., e_n \\}$ the **standard ordered basis** for $F^n$.
+- For the vector space $P_n(F)$ we call $\\{ 1, x, ..., x^n \\}$ the **standard ordered basis** for $P_n(F)$.
+
+Let $β = \\{u_1, ..., u_n\\}$ be an ordered basis for $V$. For each $x ∈ V$, let $a_1, a_2, ... a_n$ be the unique scalars such that $x = Σ_{i=1}^n a_i u_i$. The **coordinate vector** of $x$ relative to $β$ is the column vector $(a_1, ..., a_n)$, denoted $[x]_β$.
+
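A minimal numpy sketch with a hypothetical ordered basis $β$ of $ℝ^2$: the coordinate vector solves the linear system whose columns are the basis vectors.

```python
import numpy as np

# Coordinate vector [x]_β for a hypothetical ordered basis β = {b_1, b_2} of
# R^2: solve B @ [x]_β = x, where the columns of B are the basis vectors.
b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

x = np.array([3.0, 1.0])
print(np.linalg.solve(B, x))   # [2. 1.], i.e. x = 2*b_1 + 1*b_2
```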
+Let $β = \\{v_1, ..., v_n\\}$ and $γ = \\{w_1, ..., w_m\\}$ be ordered bases for $V$ and $W$. For each $1 ≤ j ≤ n$, let $a_{1j}, ..., a_{mj}$ be the unique scalars such that $T(v_j) = Σ_{i=1}^m a_{ij} w_i$. The $m × n$ matrix $A$ defined by $A_{ij} = a_{ij}$ is called the **matrix representation of $T$ in the ordered bases $β$ and $γ$**, and denoted as $A = [T]_β^γ$. If $V = W$ and $β = γ$, we write $A = [T]_β$.
+
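Column $j$ of $[T]_β^γ$ is the coordinate vector $[T(v_j)]_γ$; a numpy sketch with a hypothetical $T$ and bases:

```python
import numpy as np

# [T]_β^γ for a hypothetical T: R^2 -> R^2, T(a_1, a_2) = (a_1 + a_2, a_1 - a_2),
# and hypothetical ordered bases β (of V) and γ (of W).
def T(v):
    a1, a2 = v
    return np.array([a1 + a2, a1 - a2])

beta  = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]
gamma = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]

B, G = np.column_stack(beta), np.column_stack(gamma)
A = np.column_stack([np.linalg.solve(G, T(v)) for v in beta])  # columns [T(v_j)]_γ

# The defining property in coordinates: [T(x)]_γ = [T]_β^γ @ [x]_β.
x = np.array([2.0, 3.0])
print(np.allclose(np.linalg.solve(G, T(x)), A @ np.linalg.solve(B, x)))  # True
```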
+Theorem: $[T + U]_β^γ = [T]_β^γ + [U]_β^γ$ and $[aT]_β^γ = a[T]_β^γ$.
<details>
<summary>Proof</summary>
-Let $x,y ∈ V$ and $c ∈ F$. Then:
-$$UT(cx + y)$$
-$$= U(T(cx + y)) = U(cT(x) + T(y))$$
-$$= cU(T(x)) + U(T(y)) = c(UT)(x) + UT(y)$$
+...
</details>
-Definition: Let $T,
-...
+---
## Invertibility and Isomorphism
@@ -261,7 +268,7 @@ Let $V$ and $W$ be vector spaces.
Let $T: V → W$ be a linear transformation.
Let $I_V: V → V$ and $I_W: W → W$ denote the identity transformations on $V$ and $W$, respectively.
-Definition: A function $U: W → V$ is an **inverse** of $T$ if $TU = I_W$ and $UT = I_V$. If $T$ has an inverse, then $T$ is **invertible**.
+A function $U: W → V$ is an **inverse** of $T$ if $TU = I_W$ and $UT = I_V$. If $T$ has an inverse, then $T$ is **invertible**.
Theorem: Consider a linear function $T: V → W$.
- If $T$ is invertible, it has a *unique* inverse $T^{-1}$.
@@ -278,7 +285,7 @@ Theorem: If $T$ is linear and invertible, $T^{-1}$ is linear and invertible.
...
</details>
-Definition: Let $A$ be a $n × n$ matrix. Then $A$ is **invertible** iff there exists an $n × n$ matrix $B$ such that $AB = BA = I$.
+Let $A$ be an $n × n$ matrix. Then $A$ is **invertible** iff there exists an $n × n$ matrix $B$ such that $AB = BA = I$.
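A numpy sanity check with a hypothetical $A$ (`np.linalg.inv` raises `LinAlgError` for singular matrices):

```python
import numpy as np

# A hypothetical invertible 2 x 2 matrix; np.linalg.inv computes B = A^{-1}.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.linalg.inv(A)

I = np.eye(2)
print(np.allclose(A @ B, I) and np.allclose(B @ A, I))   # True: AB = BA = I
```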
Theorem: If $A$ is invertible, the matrix $B$ is unique, and denoted $A^{-1}$.
<details>
@@ -286,7 +293,7 @@ Theorem: If $A$ is invertible, the matrix $B$ is unique, and denoted $A^{-1}$.
Suppose there existed another inverse matrix $C$. Then $C = CI = C(AB) = (CA)B = IB = B$.
</details>
-Definition: $V$ is **isomorphic** to $W$ if there exists an *invertible* linear transformation $T : V → W$ (an **isomorphism**).
+$V$ is **isomorphic** to $W$ if there exists an *invertible* linear transformation $T : V → W$ (an **isomorphism**).
Lemma: For finite-dimensional $V$ and $W$: If $T: V → W$ is invertible, then $dim(V) = dim(W)$.
<details>