Diffstat (limited to 'mathematics')
-rw-r--r-- | mathematics/linear-algebra.md | 192 |
1 files changed, 124 insertions, 68 deletions
diff --git a/mathematics/linear-algebra.md b/mathematics/linear-algebra.md
index 7b9cc27..2645354 100644
--- a/mathematics/linear-algebra.md
+++ b/mathematics/linear-algebra.md
@@ -7,6 +7,84 @@ title: mathematics/linear algebra
 $$ℝ^n$$
 
+## Fields and Vector Spaces
+
+A **field** $F$ is a *set* with two *binary operations* $+$ and $×$ satisfying the following axioms:
+- $(F, +)$ is a *commutative group*:
+  - associativity: $∀a,b,c : a + (b + c) = (a + b) + c$
+  - additive identity: $∃0, ∀a : 0 + a = a + 0 = a$
+  - additive inverse: $∀a, ∃-a : a + (-a) = 0$
+  - commutativity: $∀a,b : a + b = b + a$
+- $(F ∖ \\{0\\}, ×)$ is a *commutative group* (note that $0$ has no multiplicative inverse):
+  - associativity: $∀a,b,c : a(bc) = (ab)c$
+  - multiplicative identity: $∃1, ∀a : 1a = a1 = a$
+  - multiplicative inverse: $∀a ≠ 0, ∃\frac{1}{a} : a\frac{1}{a} = 1$
+  - commutativity: $∀a,b : ab = ba$
+- $×$ is distributive with respect to $+$:
+  - distributivity: $∀a,b,c : a(b + c) = (ab) + (ac)$
+
+Intuitively, a field is a set on which addition, subtraction, multiplication and division are defined and behave as they do on $ℝ$. We often omit the multiplication sign, and write $a × a$ as simply $aa$. A field can also be thought of as a *commutative ring* in which every element except $0$ has a multiplicative inverse.
+
+A **vector space** $V$ *over* a field $F$ is a set with a binary operation $+$ (vector addition) and a binary function $F × V → V$ (scalar multiplication) satisfying the following axioms:
+- $(V, +)$ is a *commutative group*:
+  - associativity: $∀u,v,w : u + (v + w) = (u + v) + w$
+  - additive identity: $∃0, ∀v : 0 + v = v + 0 = v$
+  - additive inverse: $∀v, ∃-v : v + (-v) = 0$
+  - commutativity: $∀u,v : u + v = v + u$
+- Scalar multiplication is *compatible* with $F$:
+  - scalar identity: $∃1 ∈ F, ∀v : 1v = v1 = v$
+  - compatibility: $∀a,b ∈ F, ∀v : (ab)v = a(bv)$
+- The *distributive laws* hold:
+  - $∀a ∈ F, ∀u,v ∈ V : a(u + v) = au + av$
+  - $∀a,b ∈ F, ∀v ∈ V : (a + b)v = av + bv$
+
+Our definition of a vector space leads us to some facts:
+- The zero vector is *unique* and always present.
+  - Proof: Suppose there were two zero vectors: $0$ and $0'$. Then $0' = 0 + 0' = 0' + 0 = 0$.
+- Vector spaces are *non-empty*.
+  - Proof: By definition, the zero vector exists.
+- The additive inverse for some $x$ is *unique*.
+  - Proof: Suppose $-x$ and $-x'$ were both inverses of $x$. Then $-x = -x + 0 = -x + (x + (-x')) = (-x + x) + (-x') = 0 + (-x') = -x'$.
+- If $V$ is a vector space over an *infinite* field $F$ (such as ℚ, ℝ, or ℂ) and $V ≠ \\{0\\}$, then $V$ is an *infinite set*.
+  - Proof: Take any $v ≠ 0$. The scalar multiples $av$ for $a ∈ F$ are pairwise distinct: if $av = bv$ with $a ≠ b$, scaling by $\frac{1}{a - b}$ gives $v = 0$, a contradiction.
+
+<details markdown="block">
+<summary>Examples</summary>
+
+Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$.
+For $(a_1, a_2), (b_1, b_2) ∈ S$ and $c ∈ ℝ$, we define:
+- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, a_2 - b_2)$
+- $c(a_1, a_2) = (ca_1, ca_2)$.
+
+This fails commutativity! It is thus not a vector space.
+
+Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$. We define:
+- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, 0)$
+- $c(a_1, a_2) = (ca_1, 0)$
+
+This fails the existence of an additive identity! It is thus not a vector space.
+</details>
+
+## Subspaces
+
+A subset $W$ of a vector space $V$ over a field 𝔽 is called a **subspace** of $V$ if $W$ is a *vector space* over 𝔽 with the operations of addition and scalar multiplication from $V$.
+
+A subset $W$ of $V$ is a **subspace** of $V$ iff:
+- $W$ contains the zero vector (and so is non-empty)
+- $W$ is closed under addition
+- $W$ is closed under scalar multiplication
+
+Let $V$ be a vector space over $F$ and $S$ a nonempty subset of $V$. A vector $v \in V$ is a **linear combination** of vectors of $S$ if there exists a *finite* number of vectors $u_1, u_2, ..., u_n ∈ S$ and scalars $a_1, a_2, ..., a_n ∈ F$ such that $v = a_1 u_1 + a_2 u_2 + ... + a_n u_n$.
+
+We call $a_1, ..., a_n$ the *coefficients* of the linear combination.
+
+---
+
 ## Introduction: The Complex Numbers
 
 A **complex number** is of the form $a + b\text{i}$, where $a, b$ are real numbers and $i$ represents the imaginary unit.
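Since ℚ is one of the fields used in these notes, the field axioms above can be spot-checked mechanically with exact rational arithmetic. A minimal sketch using Python's `fractions.Fraction` (the helper name `check_field_axioms` and the sample set are illustrative, not part of the notes):

```python
from fractions import Fraction
from itertools import product

# A small sample of rationals; ℚ is one of the fields used in these notes.
sample = [Fraction(-2), Fraction(0), Fraction(1, 3), Fraction(5, 7)]

def check_field_axioms(elems):
    """Spot-check the field axioms on a finite sample of ℚ."""
    for a, b, c in product(elems, repeat=3):
        assert a + (b + c) == (a + b) + c      # additive associativity
        assert a + b == b + a                  # additive commutativity
        assert a * (b * c) == (a * b) * c      # multiplicative associativity
        assert a * b == b * a                  # multiplicative commutativity
        assert a * (b + c) == a * b + a * c    # distributivity
    for a in elems:
        assert 0 + a == a and a + (-a) == 0    # additive identity and inverse
        assert 1 * a == a                      # multiplicative identity
        if a != 0:
            assert a * (1 / a) == 1            # multiplicative inverse (a ≠ 0)
    return True

print(check_field_axioms(sample))
```

A finite spot check like this can refute a bogus structure but of course cannot prove the axioms for all of ℚ.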
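The failing examples above can likewise be refuted by exhibiting a single violating pair. A small sketch (function names are illustrative; the second example's sum is taken as $(a_1 + b_1, 0)$, consistent with its scalar rule):

```python
def add_bad(u, v):
    """First failing operation: (a1, a2) + (b1, b2) = (a1 + b1, a2 - b2)."""
    return (u[0] + v[0], u[1] - v[1])

def add_proj(u, v):
    """Second failing operation: (a1, a2) + (b1, b2) = (a1 + b1, 0)."""
    return (u[0] + v[0], 0.0)

u, v = (1.0, 2.0), (3.0, 4.0)
# Not commutative: (4.0, -2.0) vs (4.0, 2.0)
print(add_bad(u, v), add_bad(v, u))

# No additive identity can exist for add_proj: the second coordinate
# of any sum is always 0, so v + z = v fails whenever v[1] != 0.
print(add_proj((1.0, 2.0), (0.0, 0.0)))  # (1.0, 0.0), not (1.0, 2.0)
```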
@@ -38,7 +116,7 @@ As long as $a^2 + b^2 ≠ 0$, the inverse of $z = a + bi$ is given by $z^{-1} =
 Let $z = a + bi$, where $a, b ∈ ℝ$. The **absolute value** of $z$ is defined as the real number $\sqrt{a^2 + b^2}$.
 
 This absolute value satisfies many useful properties. For $z, w ∈ ℂ$:
-- $z \bar{z} = a^2 + b^2 = |z|^2$, where $|z| = \sqrt{a^2 + b^2}
+- $z \bar{z} = a^2 + b^2 = |z|^2$, where $|z| = \sqrt{a^2 + b^2}$
 - $\left|\frac{z}{w}\right| = \frac{|z|}{|w|}$ where $w ≠ 0$
 - $|z + w| ≤ |z| + |w|$
 - $|z| - |w| ≤ |z + w|$
@@ -51,57 +129,6 @@ If $c$ is a root of a polynomial with real coefficients, $\bar{c}$ is also a root!
 Let 𝔽 be one of the following sets: ℚ, ℝ, ℂ
 
-A **vector space** $V$ over 𝔽 is a non-empty set on which two operations (addition `+` and scalar multiplication `*`) are defined such that for each pair of elements $x, y$ in V there is a unique element $x + y$ in V, and for each element $a$ in 𝔽 and each element $x$ in $V$...
-- commutativity: $∀x,y ∈ V : x + y = y + x$
-- associativity: $∀x,y,z ∈ V : (x + y) + z = x + (y + z)$
-- additive identity: $∃𝕆 ∈ V : ∀x ∈ V, 𝕆 + x = x$
-- additive inverse: $∀x ∈ V, ∃y ∈ V : x + y = 𝕆$
-- commutativity: $∀a,b ∈ 𝔽, ∀ x ∈ V (ab)x = a(bx)
-- distributivity: $∀a ∈ 𝔽, ∀x,y ∈ V : a(x + y) = ax + ay$
-- distributativity $∀a,b ∈ 𝔽, ∀x ∈ V : (a + b)x = ax + bx$
-
-Another definition which somewhat more motivates the set being non-empty is as follows: A **vector space** is a set $V$ with additive and scalar multiplicative operations such that the following properties hold:
-- commutativity: $∀u, v ∈ V : u + v = v + u$
-- associativity: $∀u, v, w \in V : (u + v) + w = u + (v + w)$
-- additive identity: $∃0 ∈ V : ∀v ∈ V, v + 0 = v$
-- additive inverse: $∀v ∈ V, ∃w ∈ V : v + w = 0$
-- multiplicative identity: $∀v ∈ V, 1v = v$
-- distributive properties: $∀a, b ∈ 𝔽, ∀u, v ∈ V : a(u + v) = au + av, (a + b)v = av + bv$
-
-Our definition of a vector space leads to some facts:
-- The zero vector is *unique* and always present.
- - Proof: Suppose there were two zero vectors: 0 and 0'.
Then 0' = 0 + 0' = 0' + 0 = 0.
-- Vector spaces are *non-empty*.
- - Proof: by definition, the zero vector exists.
-- The additive inverse for some $x$ is *unique*.
- - Proof: exercise
-- If $V$ is a vector space over $𝔽$ and $V ≠ \\{0\\}$, then $V$ is an *infinite set*. (note this only holds over $𝔽$)
- - Proof: you can just keep adding things
-
-Example: Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$.
-For $(a_1, a_2), (b_1, b_2) ∈ S$ and $c ∈ ℝ$, we define:
-- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1, a_2 - b_2)$
-- $c(a_1, a_2) = (ca_1, ca_2)$.
-- This fails commutativity!
-
-Example: Let $S = \\{(a_1, a_2) | a_1, a_2 ∈ ℝ\\}$. We define:
-- $(a_1, a_2) + (b_1, b_2) = (a_1 + b_1)$
-- $c(a_1, a_2) = (ca_1, 0)$
-- fails zero!
-
-A subset $W$ of a vector space $V$ over a field 𝔽 is called a **subspace** of $V$ if $W$ is a *vector space* over 𝔽 with the operations of addition and scalar multiplication from $V$.
-- .
-- .
-
-A subset of $V$ is a **subspace** of V iff:
-- the subset is non-empty
-- the subset contains the zero vector
-- it is closed under addition and multiplication
-
-Let $V$ be a vector space over $F$ and $S$ a nonempty subset of $V$. A vector $v \in V$ is a **linear combination** of vectors $s,t ∈ V$ if there exists a *finite* number of vectors $u_1, u_2, ..., u_n ∈ S$ and scalars $a_1, a_2, ..., a_n ∈ F$ such that $v = a_1 u_1 + a_2 u_2 + ... a_n u_n$.
-
-We call $a_1 ... a_n$ the *coefficients* of the linear combination.
-
 https://math.stackexchange.com/questions/3492590/linear-combination-span-independence-and-bases-for-infinite-dimensional-vector
 
 Let $S$ be a nonempty subset of a vector space $V$. The **span** of $S$, denoted $span(S)$, is the set consisting of all linear combinations of vectors in S. For convenience, we define $span(∅) = \\{0\\}$.
@@ -115,7 +142,7 @@ A subspace $S$ over a vector space $V$ is **linearly dependent** if there exists
 A subset $S$ of a vector space that is not linearly dependent is **linearly independent**.
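Deciding whether a vector lies in the span of a finite set amounts to solving a linear system in the coefficients. A sketch specialized to ℝ² (the helper `in_span_2d` and the floating-point tolerance `eps` are illustrative assumptions, not from the notes):

```python
def in_span_2d(v, u1, u2, eps=1e-9):
    """Decide whether v ∈ span({u1, u2}) in ℝ² by checking a·u1 + b·u2 = v."""
    det = u1[0] * u2[1] - u1[1] * u2[0]
    if abs(det) > eps:
        return True  # u1, u2 linearly independent, so they span all of ℝ²
    # u1 and u2 are parallel; pick a nonzero direction to compare against.
    d = u1 if any(abs(x) > eps for x in u1) else u2
    if not any(abs(x) > eps for x in d):
        # span of zero vectors is {0}
        return all(abs(x) <= eps for x in v)
    # v is in the span iff it is parallel to d (cross product is zero)
    return abs(v[0] * d[1] - v[1] * d[0]) <= eps

print(in_span_2d((5.0, 7.0), (1.0, 0.0), (0.0, 1.0)))  # True: e1, e2 span ℝ²
print(in_span_2d((1.0, 2.0), (1.0, 1.0), (2.0, 2.0)))  # False: not on the line y = x
```

The same idea in $ℝ^n$ means checking solvability of an $n × k$ linear system, e.g. by Gaussian elimination.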
-Example: Consider the following set: $S = {(1, 0, 0, -1), (0, 1, 0, -1), (0, 0, 1, -1), (0, 0, 0, 1)}$
+Example: Consider the following set: $S = \\{(1, 0, 0, -1), (0, 1, 0, -1), (0, 0, 1, -1), (0, 0, 0, 1)\\}$
 
 Assume that $a_1 v_1 + a_2 v_2 + a_3 v_3 + a_4 v_4 = 0$. Then...
 as the determinant is nonzero, $S$ is linearly independent.
 
@@ -123,11 +150,11 @@ as the determinant is nonzero, S is linearly independent.
 Let $V$ be a vector space, and let $S_1 ⊆ S_2 ⊆ V$. If $S_1$ is linearly dependent, then $S_2$ is linearly dependent. If $S_2$ is linearly independent, then $S_1$ is also linearly independent.
 
-Let $S$ be a linearly independent subset of a vector space $V$, and let $v ∈ V : v ∉ S$. Then $S ∪ {v}$ is linearly independent iff $v ∈ span(S)$.
+Let $S$ be a linearly independent subset of a vector space $V$, and let $v ∈ V : v ∉ S$. Then $S ∪ \\{v\\}$ is linearly independent iff $v ∉ span(S)$.
 
 A basis $B$ for a vector space $V$ is a *linearly independent* subset of $V$ that *spans* $V$. If $B$ is a basis for $V$, we also say that the vectors of $B$ form a basis for $V$.
 
-Let $V$ be a vector space and $β = {v_1, ..., v_n}$ be a subset of V. Then β is a basis for V iff every $v ∈ V$ can be **uniquely expressed** as a linear combination of vectors of β. that is, V can be written in the form $v = a_1 u_1 + a_2 u_2 ... a_n u_n$ for unique scalars a.
+Let $V$ be a vector space and $β = \\{v_1, ..., v_n\\}$ be a subset of $V$. Then $β$ is a basis for $V$ iff every $v ∈ V$ can be **uniquely expressed** as a linear combination of vectors of $β$; that is, each $v ∈ V$ can be written in the form $v = a_1 u_1 + a_2 u_2 + ... + a_n u_n$ for unique scalars $a_1, ..., a_n$.
 
 If a vector space V is spanned by a finite set S, then some subset of S is a basis of V. So, V has a finite basis.
 Proof: If $S = ∅$, then $V = span(S) = span(∅) = \\{0\\}$, in which case $∅$ is a basis for $V$.
@@ -146,8 +173,7 @@ Theorem 1.4: Let $W$ be a subspace of a finite-dimensional vector space $V$.
The Let $V$ and $W$ be vector spaces (over a field $F$). -A function $T: V → W$ is a **linear transformation** from $V$ into $W$ if $∀x,y ∈ V, c ∈ F$ we have $T(cx + y) = cT(x) + T(y)$. -Subsequently: +A function $T: V → W$ is a **linear transformation** from $V$ into $W$ if $∀x,y ∈ V, c ∈ F$ we have $T(cx + y) = cT(x) + T(y)$. Subsequently: - $T(x + y) = T(x) + T(y)$ - $T(cx) = cT(x)$ - $T(0) = 0$ @@ -217,7 +243,8 @@ Theorem: Let $T, U : V → W$ be linear. Then their sum and scalar products are ... </details> -Theorem: The set of all linear transformations (via our definitions of addition and scalar multiplication above) $V → W$ forms a vector space over $F$. We denote this as $\mathcal{L}(V, W)$. If $V = W$, we write $\mathcal{L}(V)$. +Theorem: The set of all linear transformations (via our definitions of addition and scalar multiplication above) $V → W$ forms a vector space over $F$. We denote this as $\mathcal{L}(V, W)$. +- If $V = W$, we write $\mathcal{L}(V)$. <details markdown="block"> <summary>Proof</summary> ... @@ -235,11 +262,12 @@ $$= U(T(cx + y)) = U(cT(x) + T(y))$$ $$= cU(T(x)) + U(T(y)) = c(UT)(x) + UT(y)$$ </details> -Theorem: Let $T, U_1, U_2 ∈ \mathcal{L}(V)$. Then: -- $T(U_1 + U_2) = TU_1 + TU_2$ and $(U_1 + U_2)T = U_1 T + U_2 T$ +Theorem: Let $T, U_1, U_2 ∈ \mathcal{L}(V)$, and $a ∈ F$. Then: +- $T(U_1 + U_2) = TU_1 + TU_2$ +- $(U_1 + U_2)T = U_1 T + U_2 T$ - $T(U_1 U_2) = (TU_1) U_2$ - $TI = IT = T$ -- $∀a ∈ F : a(U_1 U_2) = (aU_1) U_2 = U_1 (aU_2)$ +- $a(U_1 U_2) = (aU_1) U_2 = U_1 (aU_2)$ <details markdown="block"> <summary>Proof</summary> ... @@ -278,8 +306,8 @@ Lemma: For finite-dimensional $V$ and $W$: If $T: V → W$ is invertible, then $ ... </details> -Theorem: $V$ is isomorphic to $W$ iff $dim(V) = dim(W)$. -- Subsequently, $V$ is isomorphic to $F^n$ iff $dim(V) = n$. +Theorem: $V$ is isomorphic to $W$ iff $dim(V) = dim(W)$. Subsequently: +- $V$ is isomorphic to $F^n$ iff $dim(V) = n$. 
<details markdown="block">
<summary>Proof</summary>
...
</details>
@@ -296,7 +324,7 @@ An **ordered basis** of a finite-dimensional vector space $V$ is, well, an order
 - For the vector space $F^n$ we call $\\{ e_1, e_2, ..., e_n \\}$ the **standard ordered basis** for $F^n$.
 - For the vector space $P_n(F)$ we call $\\{ 1, x, ..., x^n \\}$ the **standard ordered basis** for $P_n(F)$.
 
-Let $a_1, a_2, ... a_n$ be the unique scalars such that $x = Σ_{i=1}^n a_i u_i$ for all $x ∈ V$. The **coordinate vector** of $x$ relative to $β$ is $(a_1, ..., a_n)$ (vert) and denoted $[x]_β$.
+For each $x ∈ V$, let $a_1, a_2, ..., a_n$ be the unique scalars such that $x = Σ_{i=1}^n a_i u_i$. The **coordinate vector** of $x$ relative to $β$ is the column vector $\begin{pmatrix} a_1 \\ a_2 \\ ... \\ a_n \end{pmatrix}$, denoted $[x]_β$.
 
 For a linear $T : V → W$ and ordered bases $β = \\{ u_1, ..., u_n \\}$ of $V$ and $γ = \\{ w_1, ..., w_m \\}$ of $W$, let $a_{ij}$ be the unique scalars such that $T(u_j) = Σ_{i=1}^m a_{ij} w_i$. The $m × n$ matrix $A$ defined by $A_{ij} = a_{ij}$ is called the **matrix representation of $T$ in the ordered bases $β$ and $γ$**, and denoted as $A = [T]_β^γ$. If $V = W$ and $β = γ$, we write $A = [T]_β$.
@@ -325,17 +353,45 @@ Theorem: For finite-dimensional $V$ and $W$: $T$ is invertible iff $[T]_β^γ$ i
 </details>
 
 Theorem: Let $V$ and $W$ be of finite dimensions $n$ and $m$ with ordered finite bases $β$ and $γ$, respectively. The function $Φ : \mathcal{L}(V,W) → M_{m×n}(F)$ where $Φ(T) = [T]_β^γ$ for all $T ∈ \mathcal{L}(V,W)$ is an isomorphism.
+That is, the set (really *vector space*) of all linear transformations between two vector spaces $V$ and $W$ itself is isomorphic to the vector space of all $m × n$ matrices. Subsequently:
+- For $V$ and $W$ of finite dimensions $n$ and $m$, $\mathcal{L}(V,W)$ is of finite dimension $mn$.
-- Subsequently, for $V$ and $W$ of finite dimensions $n$ and $m$, $\mathcal{L}(V,W)$ is of finite dimension $mn$. -Let $β$ be an ordered basis for an $n$-dimensional $V$. The **standard representation** of $V$ with respect to $β$ is the function $ϕ_β : V → F^n$ where $ϕ_β(x) = [x]_β$ for all $x ∈ V$. +The **standard representation** of an $n$-dimensional $V$ with respect to its ordered basis $β$ is the function $ϕ_β : V → F^n$ where $ϕ_β(x) = [x]_β$ for all $x ∈ V$. -Theorem: FOr any $V$ with ordered basis $β$, $ϕ_β$ is an isomorphism. +Theorem: For any $V$ with ordered basis $β$, $ϕ_β$ is an isomorphism. <details markdown="block"> <summary>Proof</summary> ... </details> + +## The Change of Coordinate Matrix + +## Dual Spaces + +## Homogeneous Linear Differential Equations + +## Systems of Linear Equations + +This section is mostly review. + +## Determinants + +Let $A \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. + +We define the **determinant** of any 2 × 2 matrix $A$ to be the scalar $ad - bc$, and denote it $det(A)$ or $|A|$. + +... + +Let $Ax = b$ be the matrix form of a system of $n$ linear equations in $n$ unknowns (where $x = (x_1, x_2, ..., x_n)^t$). + +Cramer's Rule: If $det(A) ≠ 0$, then the system $Ax = b$ has a *unique* solution, and for each $k$ from $1$ to $n$, $x_k = [det(A)]^{-1} ∙ det(M_k)$, where $M_k$ is the $n × n$ matrix obtained from $A$ by replacing column $k$ of $A$ by $b$. +<details markdown="block"> +<summary>Proof</summary> +... +</details> + +... |