Invertibility and Isomorphic Vector Spaces
Invertible
Definition
A linear map $T \in \mathcal{L}(V, W)$ is called invertible if there exists a linear map $S \in \mathcal{L}(W, V)$ such that $ST$ equals the identity map on $V$ and $TS$ equals the identity map on $W$.
Inverse
Definition
A linear map $S \in \mathcal{L}(W, V)$ satisfying $ST = I$ and $TS = I$ is called an inverse of $T$.
Remark
Note that the first $I$ is the identity map on $V$ and the second $I$ is the identity map on $W$.
Inverse is unique
Theorem
An invertible linear map has a unique inverse.
Proof
Suppose $S_1$ and $S_2$ are both inverses of $T$. Then
$S_1 = S_1I = S_1(TS_2) = (S_1T)S_2 = IS_2 = S_2$
$\square$
Notation
If $T$ is invertible, then its inverse is denoted by $T^{-1}$. In other words, if $T \in \mathcal{L}(V, W)$ is invertible, then $T^{-1}$ is the unique element of $\mathcal{L}(W, V)$ such that $T^{-1}T = I$ and $TT^{-1} = I$.
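As a concrete sanity check, an invertible map on $\mathbb{F}^2$ can be represented by a matrix and its inverse computed numerically. This is a sketch using numpy; the matrix below is an arbitrary invertible example, not anything from the text:

```python
import numpy as np

# A concrete invertible linear map T: R^2 -> R^2, represented by a matrix.
# The matrix A is an arbitrary example chosen to be invertible.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

A_inv = np.linalg.inv(A)  # the unique inverse T^{-1}

# Check both defining equations: T^{-1} T = I and T T^{-1} = I.
I = np.eye(2)
assert np.allclose(A_inv @ A, I)
assert np.allclose(A @ A_inv, I)
```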
Invertibility is equivalent to injectivity and surjectivity
Theorem
A linear map is invertible if and only if it is injective and surjective.
Proof
Suppose $T \in \mathcal{L}(V,W)$. We need to show that $T$ is invertible
if and only if it is injective and surjective.
First suppose $T$ is invertible. To show that $T$ is injective, suppose
$u, v \in V$ and $Tu = Tv$. Then
\[ u = T^{-1}(Tu) = T^{-1}(Tv) = v, \]
so $u = v$. Hence $T$ is injective.
We are still assuming that $T$ is invertible. Now we want to prove that
$T$ is surjective. To do this, let $w \in W$. Then $w = T(T^{-1}w)$,
which shows that $w$ is in the range of $T$. Thus range $T = W$. Hence
$T$ is surjective, completing this direction of the proof.
Now suppose $T$ is injective and surjective. We want to prove that $T$
is invertible. For each $w \in W$, define $Sw$ to be the unique element
of $V$ such that $T(Sw) = w$ (the existence and uniqueness of such an
element follow from the surjectivity and injectivity of $T$). Clearly $T
\circ S$ equals the identity map on $W$.
To prove that $S \circ T$ equals the identity map on $V$, let $v \in V$.
Then
\[ T((S \circ T)v) = (T \circ S)(Tv) = Tv. \]
This equation implies that $(S \circ T)v = v$ (because $T$ is
injective). Thus $S \circ T$ equals the identity map on $V$.
To complete the proof, we need to show that $S$ is linear. To do this,
suppose $w_1, w_2 \in W$. Then
\[ T(Sw_1 + Sw_2) = T(Sw_1) + T(Sw_2) = w_1 + w_2. \]
Thus $Sw_1 + Sw_2$ is the unique element of $V$ that $T$ maps to $w_1 + w_2$. By the definition of $S$, this implies that
\[ S(w_1 + w_2) = Sw_1 + Sw_2 \]
Hence $S$ satisfies the additive property required for linearity.
The proof of homogeneity is similar. Specifically, if $w \in W$ and
$\lambda \in \mathbb{F}$, then
\[ T(\lambda Sw) = \lambda T(Sw) = \lambda w. \]
Thus $\lambda Sw$ is the unique element of $V$ that $T$ maps to $\lambda w$. By the definition of $S$, this implies that
\[ S(\lambda w) = \lambda Sw \]
Hence $S$ is linear, as desired.
$\square$
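The equivalence can be illustrated numerically: for a map given by a square matrix, injectivity corresponds to a trivial null space and surjectivity to the columns spanning the codomain, both detectable via the rank. A sketch with an arbitrarily chosen example matrix:

```python
import numpy as np

# For a map T given by a square matrix A (an arbitrary example), injectivity
# and surjectivity both correspond to A having full rank, and together they
# guarantee that an inverse exists.
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])

rank = np.linalg.matrix_rank(A)
injective = (rank == A.shape[1])   # trivial null space
surjective = (rank == A.shape[0])  # columns span the codomain

assert injective and surjective
# Since T is injective and surjective, the inverse exists:
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))
```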
Isomorphisms
Definition
An isomorphism is an invertible linear map.
Isomorphic
Definition
Two vector spaces are called isomorphic if there is an isomorphism from one vector space onto the other one.
Dimension shows whether vector spaces are isomorphic
Theorem
Two finite-dimensional vector spaces over $\mathbb{F}$ are isomorphic if and only if they have the same dimension.
Proof
($\Rightarrow$) Suppose \(V \cong W\), i.e., there exists a linear isomorphism \(T: V \to W\). Since \(T\) is bijective, it maps a basis of \(V\) to a basis of \(W\). Hence the number of basis vectors is the same:
\[ \dim V = \dim W \]
($\Leftarrow$) Suppose \(\dim V = \dim W = n\). Let \(\{v_1, \dots, v_n\}\) be a basis of \(V\) and \(\{w_1, \dots, w_n\}\) be a basis of \(W\). Define a linear map \(T: V \to W\) by
\[ T\Bigg(\sum_{i=1}^n \alpha_i v_i \Bigg) = \sum_{i=1}^n \alpha_i w_i \]
for scalars \(\alpha_1, \dots, \alpha_n \in \mathbb{F}\).
This map is well-defined and linear. Moreover, \(T\) is injective
because the kernel is \(\{0\}\) (the linear combination \(\sum \alpha_i
v_i = 0\) implies all \(\alpha_i = 0\)). It is surjective because any
\(w \in W\) can be written as a linear combination of the basis
\(\{w_i\}\). Hence \(T\) is a bijective linear map, and \(V \cong W\).
$\square$
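The construction in the proof can be sketched in code for $V = \mathcal{P}_2(\mathbb{R})$ (polynomials of degree at most 2, stored as coefficient triples) and $W = \mathbb{R}^3$; the bases and the coefficient representation are illustrative choices, not part of the theorem:

```python
# A sketch of the isomorphism built in the proof, with V the polynomials of
# degree at most 2 (basis 1, t, t^2) and W = R^3 (standard basis). Both
# spaces have dimension 3, so the basis-to-basis map is an isomorphism.

def T(p):
    """Send a + b t + c t^2, stored as coefficients (a, b, c), to (a, b, c) in R^3."""
    a, b, c = p
    return (a, b, c)

def S(w):
    """The inverse map: coordinates in R^3 back to polynomial coefficients."""
    x, y, z = w
    return (x, y, z)

p = (1.0, -2.0, 3.0)  # the polynomial 1 - 2t + 3t^2
assert S(T(p)) == p   # S composed with T is the identity on V
assert T(S((4.0, 5.0, 6.0))) == (4.0, 5.0, 6.0)  # T composed with S is the identity on W
```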
Matrix Representation Isomorphism
Theorem
Suppose \(v_1, \dots, v_n\) is a basis of \(V\) and \(w_1, \dots, w_m\) is a basis of \(W\). Then the map \(M\), which sends each linear map to its matrix with respect to these bases, is an isomorphism between \( \mathcal{L}(V,W)\) and \(\mathbb{F}^{m,n}\).
Dimension of \( \mathcal{L} (V,W)\)
Theorem
Suppose \(V\) and \(W\) are finite-dimensional. Then \( \mathcal{L}(V,W)\) is finite-dimensional and
$$ \dim \mathcal{L}(V,W) = ( \dim V ) \cdot ( \dim W ) $$
Matrix of a vector
Definition
Suppose \( v \in V \) and \( v_1, \ldots, v_n \) is a basis of \( V \). The matrix of \( v \) with respect to this basis is the \( n \)-by-1 matrix
\[ M(v) = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} \]
where \( c_1, \ldots, c_n \) are the scalars such that
\[ v = c_1 v_1 + \cdots + c_n v_n \]
Column of a Linear Map's Matrix
Theorem
Suppose \( T \in \mathcal{L}(V, W) \) and \( v_1, \ldots, v_n \) is a basis of \( V \) and \( w_1, \ldots, w_m \) is a basis of \( W \). Let \( 1 \leq k \leq n \). Then the \( k^{\text{th}} \) column of \( M(T) \), which is denoted by \( M(T)_{\cdot,k} \), equals \( M(Tv_k) \).
Proof
The desired result follows immediately from the definitions of \( M(T) \) and \( M(Tv_k) \).
$\square$
The next result shows how the notions of the matrix of a linear map, the matrix of a vector, and matrix multiplication fit together.
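Both the column description of $M(T)$ and the identity $M(Tv) = M(T)M(v)$ can be checked numerically. In this sketch $V = \mathbb{F}^3$ and $W = \mathbb{F}^2$ with the standard bases, so $M(T)$ is just the matrix $A$ below, an arbitrary example:

```python
import numpy as np

# With standard bases on R^3 and R^2, the matrix of T is A itself.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # M(T), a 2-by-3 matrix

def T(v):
    # The linear map x -> A x, written out entry by entry.
    return np.array([v[0] + 2*v[1] + 3*v[2],
                     4*v[0] + 5*v[1] + 6*v[2]])

# The k-th column of M(T) equals M(T v_k), the coordinates of T applied
# to the k-th basis vector of V.
for k in range(3):
    v_k = np.eye(3)[:, k]
    assert np.allclose(A[:, k], T(v_k))

# Linear maps act like matrix multiplication: M(Tv) = M(T) M(v).
v = np.array([1.0, -1.0, 2.0])
assert np.allclose(T(v), A @ v)
```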
Linear maps act like matrix multiplication
Theorem
Suppose \( T \in \mathcal{L}(V, W) \) and \( v \in V \). Suppose \( v_1, \ldots, v_n \) is a basis of \( V \) and \( w_1, \ldots, w_m \) is a basis of \( W \). Then
\[ M(Tv) = M(T)M(v) \]
Operator
Definition
- A linear map from a vector space to itself is called an operator.
- The notation $\mathcal{L}(V)$ denotes the set of all operators on $V$. In other words, $\mathcal{L}(V) = \mathcal{L}(V,V)$.
Injectivity is equivalent to surjectivity in finite dimensions
Theorem
Suppose $V$ is finite-dimensional and $T \in \mathcal{L}(V)$. Then the following are equivalent:
- $T$ is invertible
- $T$ is injective
- $T$ is surjective
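A quick numerical sketch of this equivalence for an operator on $\mathbb{F}^3$: for a square matrix, all three conditions reduce to having full rank. The matrix below is an arbitrary full-rank example:

```python
import numpy as np

# For an operator on a finite-dimensional space (a square matrix A),
# full rank means injective, surjective, and invertible all at once.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])

n = A.shape[0]
rank = np.linalg.matrix_rank(A)

injective = (rank == n)   # trivial null space
surjective = (rank == n)  # range is all of V
invertible = injective and surjective

assert injective == surjective == invertible
if invertible:
    assert np.allclose(A @ np.linalg.inv(A), np.eye(n))
```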