Gaussian elimination takes a matrix $A \in F^{m \times n}$ and applies a sequence of invertible elementary row operations (row interchanges, scalings, and replacements), collected as a matrix $E \in F^{m \times m}$, to produce a row echelon form of $A$. Furthermore, the algorithm can be extended to produce the reduced row echelon form.
Gaussian Elimination (**A**):
for **row** i = 1 to m:
1. find the leftmost column of **A** with a nonzero entry at or below row i ; **pivot column**
2. apply a row interchange so that the ith entry of that column is nonzero ; **pivot**
3. eliminate all nonzero entries below the pivot using row replacement
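The steps above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the function name is hypothetical, and exact rational arithmetic via `fractions.Fraction` is assumed to avoid floating-point drift.

```python
from fractions import Fraction

def row_echelon(A):
    """Reduce a matrix (a list of rows) to a row echelon form using
    row interchanges and row replacement."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    r = 0  # row where the next pivot should land
    for c in range(n):  # scan columns left to right
        # step 1-2: find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pivot is None:
            continue  # column has no usable pivot; move right
        A[r], A[pivot] = A[pivot], A[r]  # row interchange
        # step 3: eliminate all nonzero entries below the pivot
        for i in range(r + 1, m):
            factor = A[i][c] / A[r][c]
            A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
        if r == m:
            break
    return A
```

For example, `row_echelon([[0, 1], [1, 0]])` performs a single interchange, while `row_echelon([[1, 2], [2, 4]])` zeroes out the dependent second row.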
<aside> <img src="/icons/bookmark_gray.svg" alt="/icons/bookmark_gray.svg" width="40px" />
Definition. A row echelon form of $A \in F^{m \times n}$ is a row equivalent matrix $B$, that is to say $B = EA$ for some sequence of row operations $E$, s.t. every zero row of $B$ lies below every nonzero row, and the leading (leftmost nonzero) entry of each nonzero row lies strictly to the right of the leading entry of the row above it.
Furthermore, if a row equivalent matrix $C$ satisfies the above along with the requirements that each pivot is $1$ and all entries above each pivot are zero, then $C$ is called the reduced row echelon form of $A$.
</aside>
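As an illustrative example (not from the notes), the left matrix below is in row echelon form, and the right matrix is its reduced row echelon form:

$$ B = \begin{pmatrix} 2 & 1 & 3 \\ 0 & 0 & 5 \\ 0 & 0 & 0 \end{pmatrix}, \qquad C = \begin{pmatrix} 1 & \tfrac{1}{2} & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} $$

In $B$ the pivots $2$ and $5$ step strictly rightward and the zero row sits at the bottom; in $C$ each pivot has been scaled to $1$ and the entries above each pivot cleared.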
The main motivation behind finding such forms comes from augmented matrices, which concatenate two matrices $A$ and $B$ into $(A \mid B)$, allowing the inverse of $A$ to be solved for (if it exists). This is because the reduced row echelon form of an invertible matrix is simply the identity, meaning that the sequence of row operations applied to it is its inverse.
$$ EA = I \implies E = E(AA^{-1}) = (EA)A^{-1} = A^{-1} $$
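Inversion by augmentation can be sketched as follows. A minimal Python illustration (the function name is hypothetical, and exact arithmetic with `fractions.Fraction` is assumed): row-reducing $(A \mid I)$ to reduced row echelon form yields $(I \mid A^{-1})$.

```python
from fractions import Fraction

def inverse_via_augmentation(A):
    """Invert a square matrix A by reducing (A | I) to (I | A^{-1})."""
    n = len(A)
    # build the augmented matrix (A | I) with exact rational entries
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if M[i][c] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[c], M[pivot] = M[pivot], M[c]          # row interchange
        M[c] = [x / M[c][c] for x in M[c]]       # scale the pivot to 1
        for i in range(n):                       # clear the rest of the column
            if i != c and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[c])]
    return [row[n:] for row in M]  # the right block is A^{-1}
```

For instance, `inverse_via_augmentation([[2, 1], [1, 1]])` returns `[[1, -1], [-1, 2]]`, and a singular input raises an error since no pivot can be found in some column.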
<aside> <img src="/icons/castle_yellow.svg" alt="/icons/castle_yellow.svg" width="40px" />
Theorem. For $A \in F^{m \times n}$, $x \in F^n$, $y \in F^{m}$ and any row echelon form $B$ of $A$,
</aside>
<aside> <img src="/icons/castle_yellow.svg" alt="/icons/castle_yellow.svg" width="40px" />
Theorem. For $A \in F^{m \times n}$ s.t. $A = ( w_1 \dots w_n)$ and any row echelon form $B$ of $A$,
</aside>
For a finite-dimensional vector space $\mathbf{V}$ with bases $\beta$ and $\gamma$, a change of basis from $\beta$ to $\gamma$ with respect to an operator $T \in \mathscr{L}(\mathbf{V})$ re-expresses the matrix of $T$ in the coordinates of the new basis.
$$ [T]_\gamma = [I]_\beta^\gamma \, [T]_\beta \, [I]_\gamma ^\beta $$
<aside> <img src="/icons/chess-pawn_brown.svg" alt="/icons/chess-pawn_brown.svg" width="40px" />
Proposition. The inverse of the identity from $\beta$ to $\gamma$ is the identity from $\gamma$ to $\beta$.
$$ ([I]_\beta ^\gamma)^{-1} = [I]_\gamma ^\beta $$
</aside>
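Both the change-of-basis formula and the proposition can be checked on a small example. A Python sketch, assuming (hypothetically) $\mathbf{V} = \mathbb{R}^2$, $\beta$ the standard basis, $\gamma = \{(1,1), (1,-1)\}$, and $T$ the reflection across the line $y = x$:

```python
from fractions import Fraction

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of an invertible 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

# [I]_gamma^beta has the gamma vectors as its columns (gamma-coords -> beta-coords)
P = [[1, 1], [1, -1]]
# T reflects across y = x; in the standard basis it swaps coordinates
T_beta = [[0, 1], [1, 0]]
# [T]_gamma = [I]_beta^gamma [T]_beta [I]_gamma^beta, where by the
# proposition [I]_beta^gamma = P^{-1}
T_gamma = matmul(inv2(P), matmul(T_beta, P))
```

Here `T_gamma` comes out diagonal, `[[1, 0], [0, -1]]`, since the $\gamma$ vectors are eigenvectors of the reflection with eigenvalues $1$ and $-1$.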
For a finite-dimensional vector space $\mathbf{V}$ over a field $F$, its dual space $\mathbf{V}'$ is the space of all linear functionals on $\mathbf{V}$: linear maps from $\mathbf{V}$ to its underlying field $F$.
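Concretely, once a basis is fixed, a linear functional on $F^n$ is determined by $n$ coefficients. A small Python sketch (the helper name and coefficients are illustrative, not from the notes):

```python
def functional(a):
    """Return the linear functional v -> sum_i a_i v_i determined by
    the coefficient tuple a."""
    return lambda v: sum(ai * vi for ai, vi in zip(a, v))

f = functional([1, -2, 3])  # a hypothetical element of (R^3)'
# linearity check: f(u + c*v) == f(u) + c*f(v)
u, v, c = [1, 0, 2], [0, 1, 1], 4
lhs = f([ui + c * vi for ui, vi in zip(u, v)])
rhs = f(u) + c * f(v)
```

Here both sides evaluate to $11$, illustrating that $f$ is indeed linear.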