\documentclass{article} \usepackage{amssymb,amsmath} \hyphenation{Birk-hoff} \def\course{MT--A315--01} \def\semester{Fall 2003} \def\PostscriptAdjustment{\addtolength{\topmargin}{49pt}} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \renewcommand{\geq}{\geqslant} \renewcommand{\leq}{\leqslant} \def\psfile#1#2#3{\begin{center} \begin{figure}[ht] \begin{center} \ \psfig{figure=#1.ps,height=#2in,width=#3in} \end{center} \end{figure} \end{center}} \def\exp#1{^{\raise3pt\hbox{$\scriptstyle{#1}$}}} \def\tothe#1#2{^{\raise#1pt\hbox{$\scriptstyle{#2}$}}} \def\sub#1#2{_{\raise-#1pt\hbox{$\scriptstyle{#2}$}}} \def\lh{\stackrel{\mathrm{\scriptscriptstyle{L'H}}}{=}} \def\arccsc{\mathrm{arccsc}\,} \def\arcsec{\mathrm{arcsec}\,} \def\arccot{\mathrm{arccot}\,} \def\({\left(} \def\){\right)} \def\[{\left[} \def\]{\right]} \def\={\quad = \quad} \def\+{\quad + \quad} \def\-{\quad - \quad} \def\dx{\, dx} \def\dy{\, dy} \def\dz{\, dz} \def\dt{\, dt} \def\q{\quad} \def\qq{\qquad} \def\w{\wedge} \def\F{\mathbf F} \def\cl{\centerline} \def\grad{\nabla} \def\ni{\noindent} \def\ms{\medskip} \def\bs{\bigskip} \def\ss{\smallskip} \def\vf{\vfill} \def\br{\break} \def\i{{\mathbf i}} \def\j{{\mathbf j}} \def\k{{\mathbf k}} \def\sol{{\noindent {\bf Solution.} }} \def\le{\left|} \def\ri{\right|} \def\p{\partial} \def\rh{\rho} \def\ph{\phi} \def\th{\theta} \def\D{\mathbf D} \def\la{\longrightarrow} \def\La{\Longrightarrow} \def\Vec{\overrightarrow} \def\bm{\left[\begin{matrix}} \def\em{\end{matrix}\right]} \def\det{\left|\begin{matrix}} \def\edet{\end{matrix}\right|} \def\dint{\int\!\!\!\int} \def\trint{\int\!\!\!\int\!\!\!\int} \def\l{\lambda} \def\pb{\vfill \pagebreak} \def\hb{\vskip 4in} \def\ds{\displaystyle} \def\curl{\mathrm{curl}} \flushbottom \def\singlepage{ \addtolength{\textheight}{154pt} \addtolength{\topmargin}{-77pt} \addtolength{\textwidth}{154pt} \addtolength{\oddsidemargin}{-77pt} \pagestyle{empty}} \def\multipage{ \addtolength{\textheight}{134pt} \addtolength{\topmargin}{-78pt} \addtolength{\textwidth}{150pt} \addtolength{\oddsidemargin}{-75pt} } \multipage % \PostscriptAdjustment \def\margintopbottomaddition{3.25pt} \def\marginaddition{2.85pt} \addtolength{\textheight}{-\margintopbottomaddition} \addtolength{\textheight}{-\margintopbottomaddition} \addtolength{\topmargin}{\margintopbottomaddition} \addtolength{\textwidth}{\marginaddition} \addtolength{\textwidth}{\marginaddition} \addtolength{\oddsidemargin}{-\marginaddition} \begin{document} \setlength{\topskip}{0in} \noindent \begin{minipage}[c]{1.2in} {\footnotesize {\sc St.\ Louis University\\ \semester}} \end{minipage} \hfill \begin{minipage}[c]{4.0in} \begin{center} {\LARGE {\bf Solutions:\ Final Exam}} \end{center} \end{minipage} \hfill \begin{minipage}[c]{1.2in} {\footnotesize {\sc \flushright \course\\ \hfill Prof.\ G. Marks}} \end{minipage} \bs \bs \it \ni 1. (13 pts.) {\it No partial credit is available on this problem, so be meticulous.} Write down the set of all solutions $\,(x,y,z) \in \mathbb{R}^{3}\,$ to the following system of equations: \begin{equation} \begin{array}{ccccccc} 2x&-&y&-&3z &=& -6 \\ \noalign{\ms} 3x&-&8y&-&2z &=& 10 \\ \noalign{\ms} -x&+&2y&+&z &=& -1 \end{array} \label{f2003math315finaleqn1} \end{equation} \bs \rm \sol Here is one approach. 
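Note first that the steps below can equally well be carried out on the rows of the augmented matrix of the system, $$\left[\begin{array}{rrr|r} 2 & -1 & -3 & -6 \\ \noalign{\ms} 3 & -8 & -2 & 10 \\ \noalign{\ms} -1 & 2 & 1 & -1 \end{array}\right],$$ which is just a bookkeeping restatement of (\ref{f2003math315finaleqn1}).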
Add $2$ times the third row to the second row and $3$ times the third row to the first row; then add the second row to the first row: $$(\ref{f2003math315finaleqn1}) \qq\Longleftrightarrow\qq \begin{array}{ccccccc} -x&+&5y& & &=& -9 \\ \noalign{\ms} x&-&4y& & &=& 8 \\ \noalign{\ms} -x&+&2y&+&z &=& -1 \end{array} \qq\Longleftrightarrow\qq \begin{array}{ccccccc} & & y& & &=& -1 \\ \noalign{\ms} x&-&4y& & &=& 8 \\ \noalign{\ms} -x&+&2y&+&z &=& -1 \end{array}$$ Finish by back-substitution. The first equation is $y=-1$, whence the second equation yields $x = 4y+8 = 4$, whence the third equation yields $z = x-2y-1 = 5$. The system (\ref{f2003math315finaleqn1}) has a unique solution: $(x,y,z) = (4, -1, 5)$. \bs \it \ni 2. (a) (9 pts.) Let $\mathcal{S}$ and $\mathcal{T}$ be subspaces of some vector space $\mathcal{V}$. By $\mathcal{S} + \mathcal{T}$ we mean the set of all vectors of the form $\mathbf{x} + \mathbf{y}$ where $\mathbf{x} \in \mathcal{S}$ and $\mathbf{y} \in \mathcal{T}$. Prove that $\mathcal{S} + \mathcal{T}$ is a subspace of $\mathcal{V}$. \bs \rm \sol Suppose $\mathbf{u}, \mathbf{v} \in \mathcal{S} + \mathcal{T}$ and $c \in \mathbb{R}$. By definition of $\mathcal{S} + \mathcal{T}$, we can write $\mathbf{u} = \mathbf{s}_{1} + \mathbf{t}_{1}$ and $\mathbf{v} = \mathbf{s}_{2} + \mathbf{t}_{2}$ for some vectors $\mathbf{s}_{1}, \mathbf{s}_{2} \in \mathcal{S}$ and $\mathbf{t}_{1}, \mathbf{t}_{2} \in \mathcal{T}$. Since $\mathcal{S}$ and $\mathcal{T}$ are subspaces, we have $c \mathbf{s}_{1} \in \mathcal{S}$, $\mathbf{s}_{1} + \mathbf{s}_{2} \in \mathcal{S}$, $c \mathbf{t}_{1} \in \mathcal{T}$, and $\mathbf{t}_{1} + \mathbf{t}_{2} \in \mathcal{T}$. Therefore $\mathbf{u} + \mathbf{v} = (\mathbf{s}_{1} + \mathbf{s}_{2}) + (\mathbf{t}_{1} + \mathbf{t}_{2}) \in \mathcal{S} + \mathcal{T}$ and $c\mathbf{u} = c\mathbf{s}_{1} + c\mathbf{t}_{1} \in \mathcal{S} + \mathcal{T}$, which proves that $\mathcal{S} + \mathcal{T}$ is a subspace of $\mathcal{V}$. \bs \it (b) (9 pts.) Let $\mathcal{S}$ and $\mathcal{T}$ be subspaces of some vector space $\mathcal{V}$. Suppose that $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}\}$ is a set of vectors in $\mathcal{S}$ that are linearly independent, and that $\{\mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$ is a set of vectors in $\mathcal{T}$ that are linearly independent. Prove that if $\mathcal{S} \cap \mathcal{T} = \{\mathbf{0}\}$ then $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}, \mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$ is a linearly independent set of vectors. \bs \rm \sol Suppose \begin{equation} c_{1}\mathbf{s}_{1} + c_{2}\mathbf{s}_{2} + \cdots + c_{k}\mathbf{s}_{k} + d_{1}\mathbf{t}_{1} + d_{2}\mathbf{t}_{2} + \cdots + d_{\ell}\mathbf{t}_{\ell} = \mathbf{0} \label{fall2003math315finalexamprob2beqn1} \end{equation} for some scalars $c_{1}, c_{2}, \ldots, c_{k}, d_{1}, d_{2}, \ldots, d_{\ell}$. Then $$c_{1}\mathbf{s}_{1} + c_{2}\mathbf{s}_{2} + \cdots + c_{k}\mathbf{s}_{k} = - (d_{1}\mathbf{t}_{1} + d_{2}\mathbf{t}_{2} + \cdots + d_{\ell}\mathbf{t}_{\ell}) \in \mathcal{S} \cap \mathcal{T} \qq\mbox{and}\qq \mathcal{S} \cap \mathcal{T} = \{\mathbf{0}\}$$ imply that \begin{equation} c_{1}\mathbf{s}_{1} + c_{2}\mathbf{s}_{2} + \cdots + c_{k}\mathbf{s}_{k} = \mathbf{0} = - (d_{1}\mathbf{t}_{1} + d_{2}\mathbf{t}_{2} + \cdots + d_{\ell}\mathbf{t}_{\ell}).
\label{fall2003math315finalexamprob2beqn2} \end{equation} By linear independence of $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}\}$ and $\{\mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$, we infer from Equation~(\ref{fall2003math315finalexamprob2beqn2}) that every $c_{i}$ and every $d_{i}$ equals $0$. Since Equation~(\ref{fall2003math315finalexamprob2beqn1}) implies $c_{1} = c_{2} = \cdots = c_{k} = d_{1} = d_{2} = \cdots = d_{\ell} = 0$, the vectors $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}, \mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$ are linearly independent. \bs \it \ni 3. (18 pts.) Let $\mathcal{V}$ be an inner product space with inner product $(\mbox{\hspace{0.63em}},\mbox{\hspace{0.63em}}): \mathcal{V} \times \mathcal{V} \la \mathbb{R}$. Fix a vector $\mathbf{v} \in \mathcal{V}$. (Do not make any additional assumptions about what $\mathcal{V}$ or $(\mbox{\hspace{0.63em}},\mbox{\hspace{0.63em}})$ or $\mathbf{v}$ is.) \bs (a) Decide whether the quoted statement is true or false: ``The function $f: \mathcal{V} \la \mathbb{R}$ defined by $$f(\mathbf{u}) = (\mathbf{u}, \mathbf{v})\qq \mbox{for each $\mathbf{u}\in \mathcal{V}$}$$ is a linear transformation from $\mathcal{V}$ to $\mathbb{R}$.'' (You must prove your answer to receive any credit.) \bs \rm \sol The statement is true. Suppose $\mathbf{u}_{1}, \mathbf{u}_{2} \in \mathcal{V}$ and $c \in \mathbb{R}$. By the additivity axiom of inner products, $(\mathbf{u}_{1} + \mathbf{u}_{2}, \mathbf{v}) = (\mathbf{u}_{1}, \mathbf{v}) + (\mathbf{u}_{2}, \mathbf{v})$. By the homogeneity axiom, $(c\mathbf{u}_{1}, \mathbf{v}) = c (\mathbf{u}_{1}, \mathbf{v})$. Hence $$f(\mathbf{u}_{1} + \mathbf{u}_{2}) = (\mathbf{u}_{1} + \mathbf{u}_{2}, \mathbf{v}) = (\mathbf{u}_{1}, \mathbf{v}) + (\mathbf{u}_{2}, \mathbf{v}) = f(\mathbf{u}_{1}) + f(\mathbf{u}_{2})$$ and $$f(c\mathbf{u}_{1}) = (c\mathbf{u}_{1}, \mathbf{v}) = c(\mathbf{u}_{1}, \mathbf{v}) = c\, f(\mathbf{u}_{1}),$$ proving that $f$ is a linear transformation. \bs \it (b) Decide whether the quoted statement is true or false: ``The set $$\left\{\mathbf{u} \in \mathcal{V} \; : \; (\mathbf{u}, \mathbf{v}) = 0\right\}$$ is a subspace of $\mathcal{V}$.'' (You must prove your answer to receive any credit.) \bs \rm \sol The statement is true, since the given set is the kernel of the function $f : \mathcal{V} \la \mathbb{R}$ defined in part~(a), and the kernel of any linear transformation is a subspace. \bs \it \ni 4. (13 pts.) Show that the vectors $$\mathbf{x}_{1} = \bm 1 \\ 2 \\ 1 \\ -1 \em, \qq \mathbf{x}_{2} = \bm 2 \\ 0 \\ 2 \\ 0 \em, \qq \mathbf{x}_{3} = \bm 1 \\ 1 \\ 1 \\ 1 \em, \qq \mathbf{x}_{4} = \bm 0 \\ 1 \\ 1 \\ 0 \em$$ form a basis for $\mathbb{R}^{4}$. \bs \rm \sol Since the matrix $$\bm 1 & 2 & 1 & 0 \\ 2 & 0 & 1 & 1\\ 1 & 2 & 1 & 1 \\ -1 & 0 & 1 & 0\em$$ has determinant $6$ (most easily seen by expanding along the fourth column), and any matrix with nonzero determinant has linearly independent columns, the vectors $\{\mathbf{x}_{1}, \mathbf{x}_{2}, \mathbf{x}_{3}, \mathbf{x}_{4}\}$ are linearly independent. Four linearly independent vectors in a $4$-dimensional vector space form a basis. \bs \it \ni 5. (18 pts.) Let $$A = \bm 1 & 3 & -3 \\ 6 & 4 & -9 \\ 4 & 4 & -7 \em.$$ \bs (a) Find all eigenvalues and eigenvectors of $A$. 
\bs \rm \sol We first compute all eigenvalues by finding the roots of the characteristic polynomial of $A$: \begin{eqnarray*} \mathrm{det}(A - \lambda I) &=& \det 1 - \lambda & 3 & -3 \\ 6 & 4 - \lambda & -9 \\ 4 & 4 & -7 - \lambda \edet \\ &=& \det 1 - \lambda & 2 + \lambda & -3 \\ 6 & -2 - \lambda & -9 \\ 4 & 0 & -7 - \lambda \edet \qq\mbox{(subtracting the first column from the second column)} \\ &=& \det 1 - \lambda & 2 + \lambda & -3 \\ 7 - \lambda & 0 & -12 \\ 4 & 0 & -7 - \lambda \edet \qq\mbox{(adding the first row to the second row)} \\ &=& -(2 + \lambda) \det 7 - \lambda & -12 \\ 4 & -7 - \lambda \edet \qq\mbox{(expanding the determinant along the second column)}\\ &=& -(2 + \lambda)(\lambda^{2} - 1) \\ &=& -(\lambda + 2)(\lambda + 1)(\lambda - 1). \end{eqnarray*} The eigenvalues are the roots of this polynomial: $\lambda_{1} = -2$, $\lambda_{2} = -1$, and $\lambda_{3} = 1$. Now we will find eigenvectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ corresponding to the eigenvalues $\lambda_1$, $\lambda_2$, and $\lambda_3$. This is done by finding a basis for the nullspace of the matrix $A - \lambda_i I$ for each $i=1,2,3$. Remember that performing elementary row operations on a matrix does not alter its nullspace. An eigenvector $\mathbf{v}_1$ corresponding to the eigenvalue $\lambda_1=-2$ is a basis for the nullspace $$\begin{array}{ccccccc} \operatorname{NS} \bm 1 - \lambda_1 & 3 & -3 \\ 6 & 4 - \lambda_1 & -9 \\ 4 & 4 & -7 - \lambda_1 \em &=& \operatorname{NS}\bm 3 & 3 & -3 \\ 6 & 6 & -9 \\ 4 & 4 & -5 \em &=& \operatorname{NS}\bm 3 & 3 & -3 \\ 6 & 6 & -9 \\ 0 & 0 & 0 \em &=& \operatorname{NS}\bm 1 & 1 & -1 \\ 2 & 2 & -3 \\ 0 & 0 & 0 \em \\ \noalign{\ms} &=& \operatorname{NS}\bm 1 & 1 & -1 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \em &=& \operatorname{NS}\bm 1 & 1 & 0 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \em &=& \operatorname{Span}\left\{\bm -1 \\ 1 \\ 0 \em\right\}. \end{array}$$ An eigenvector $\mathbf{v}_2$ corresponding to the eigenvalue $\lambda_2=-1$ is a basis for the nullspace $$\begin{array}{ccccccc} \operatorname{NS} \bm 1 - \lambda_2 & 3 & -3 \\ 6 & 4 - \lambda_2 & -9 \\ 4 & 4 & -7 - \lambda_2 \em &=& \operatorname{NS}\bm 2 & 3 & -3 \\ 6 & 5 & -9 \\ 4 & 4 & -6 \em &=& \operatorname{NS}\bm 2 & 3 & -3 \\ 6 & 5 & -9 \\ 0 & 0 & 0 \em &=& \operatorname{NS}\bm 2 & 3 & -3 \\ 0 & -4 & 0 \\ 0 & 0 & 0 \em \\ \noalign{\ms} &=& \operatorname{NS}\bm 2 & 3 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \em &=& \operatorname{NS}\bm 2 & 0 & -3 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \em &=& \operatorname{Span}\left\{\bm 3 \\ 0 \\ 2 \em\right\}. \end{array}$$ An eigenvector $\mathbf{v}_3$ corresponding to the eigenvalue $\lambda_3 = 1$ is a basis for the nullspace $$\begin{array}{ccccccc} \operatorname{NS} \bm 1 - \lambda_3 & 3 & -3 \\ 6 & 4 - \lambda_3 & -9 \\ 4 & 4 & -7 - \lambda_3 \em &=& \operatorname{NS}\bm 0 & 3 & -3 \\ 6 & 3 & -9 \\ 4 & 4 & -8 \em &=& \operatorname{NS}\bm 0 & 1 & -1 \\ 2 & 1 & -3 \\ 1 & 1 & -2 \em &=& \operatorname{NS}\bm 2 & 1 & -3 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \em \\ \noalign{\ms} &=& \operatorname{NS}\bm 2 & 0 & -2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \em &=& \operatorname{NS}\bm 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \em &=& \operatorname{Span}\left\{\bm 1 \\ 1 \\ 1 \em\right\}. 
\end{array}$$ In summary, the eigenvalues of $A$ are $\lambda_{1} = -2$, $\lambda_{2} = -1$, and $\lambda_{3} = 1$; corresponding eigenvectors are $$\mathbf{v}_{1} = \bm -1 \\ 1 \\ 0 \em, \qq \mathbf{v}_{2} = \bm 3 \\ 0 \\ 2 \em, \qq\mbox{and}\qq \mathbf{v}_{3} = \bm 1 \\ 1 \\ 1 \em$$ (of course, any nonzero scalar multiple of $\mathbf{v}_{i}$ is also an eigenvector with eigenvalue $\lambda_{i}$). \bs \it (b) Find an invertible matrix $Q$ and a diagonal matrix $D$ such that $A = Q D Q^{-1}$. \bs \rm \sol The columns of $Q$ are eigenvectors corresponding to $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ (here we take $-\mathbf{v}_{1}$, $\mathbf{v}_{2}$, and $\mathbf{v}_{3}$, which is legitimate because $-\mathbf{v}_{1}$ is also an eigenvector with eigenvalue $\lambda_{1}$); the diagonal entries of $D$ are the eigenvalues $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$. Thus, $$A = \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em \bm -2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \em \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em^{-1}.$$ \bs \it (c) Compute $A^{n}$ where $n = 10^{100}$. \bs \rm \sol Since $A = Q D Q^{-1}$ implies $A^{n} = Q D^{n} Q^{-1}$, and since the exponent $10^{100}$ is even (so that $(-2)^{10^{100}} = 2^{10^{100}}$ and $(-1)^{10^{100}} = 1$), by part~(b) we have \begin{eqnarray*} A^{10^{100}} &=& \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em \bm -2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \em^{10^{100}} \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em^{-1} \\ &=& \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em \bm 2^{10^{100}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \em \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em^{-1} \\ &=& \bm 1 & 3 & 1 \\ -1 & 0 & 1 \\ 0 & 2 & 1 \em \bm 2^{10^{100}} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \em \bm 2 & 1 & -3 \\ -1 & -1 & 2 \\ 2 & 2 & -3 \em \q = \q \bm 2\tothe{1}{1 + 10^{100}} - 1 & 2\tothe{1}{10^{100}} - 1 & - 3\cdot 2\tothe{1}{10^{100}} + 3 \\ \noalign{\ss} - 2\tothe{1}{1 + 10^{100}} + 2 & \; - 2\tothe{1}{10^{100}} + 2 \; & 3\cdot 2\tothe{1}{10^{100}} - 3 \\ \noalign{\ss} 0 & 0 & 1 \em. \end{eqnarray*} \bs \it \ni 6. (20 pts.) A matrix $A$ is called {\it idempotent} if $A = A^{2}$. \bs (a) Give an example of an idempotent $2$ by $2$ matrix whose four entries are all nonzero. \bs \rm \ni{\bf Solution~\#1.} Since a matrix similar to an idempotent matrix is clearly idempotent, one example is the matrix $$\bm 2 & 1 \\ 1 & 1 \em \bm 1 & 0 \\ 0 & 0 \em \bm 2 & 1 \\ 1 & 1 \em^{-1} = \bm 2 & 1 \\ 1 & 1 \em \bm 1 & 0 \\ 0 & 0 \em \bm 1 & -1 \\ -1 & 2 \em = \bm 2 & -2 \\ 1 & -1\em.$$ \bs \ni{\bf Solution~\#2.} Another example is any projection matrix $P = A (A^{t} A)^{-1} A^{t}$ that gives the orthogonal projection onto a line in $\mathbb{R}^{2}$ that is not horizontal or vertical: $$P = \bm a \\ b \em \( \bm a & b \em \bm a \\ b \em\)^{-1} \bm a & b \em = \frac{1}{a^{2} + b^{2}} \bm a \\ b \em \bm a & b \em = \bm \ds \frac{a^{2}}{a^{2} + b^{2}} & \ds \frac{ab}{a^{2} + b^{2}} \\ \noalign{\ms} \ds \frac{ab}{a^{2} + b^{2}} & \ds \frac{b^{2}}{a^{2} + b^{2}} \em$$ where $a$ and $b$ are nonzero. \bs \it (b) What can you say about an invertible idempotent matrix? \bs \rm \sol It must be the identity matrix $I$. If $A^{-1}$ exists, and $A^{2} = A$, then $A^{-1} A^{2} = A^{-1} A$, i.e.\ $A = I$. \bs \it (c) What are the possible eigenvalues of an idempotent matrix? \bs \rm \sol Zero and one. If $A \mathbf{v} = \lambda \mathbf{v}$ for some nonzero vector $\mathbf{v}$, then since $A = A^{2}$ we have $$\lambda \mathbf{v} = A \mathbf{v} = A^{2} \mathbf{v} = A(A \mathbf{v}) = A(\lambda \mathbf{v}) = \lambda (A \mathbf{v}) = \lambda^{2} \mathbf{v}.$$ Thus, $(\lambda^{2} - \lambda) \mathbf{v} = \mathbf{0}$, and since $\mathbf{v} \neq \mathbf{0}$ we see that $\lambda^{2} - \lambda = 0$, which means that $\lambda = 0$ or $\lambda = 1$.
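\ms \ni Equivalently: an idempotent matrix satisfies $p(A) = 0$ for the polynomial $$p(\lambda) \= \lambda^{2} - \lambda \= \lambda(\lambda - 1),$$ and every eigenvalue of $A$ must be a root of any polynomial that annihilates $A$, since $A \mathbf{v} = \lambda \mathbf{v}$ with $\mathbf{v} \neq \mathbf{0}$ gives $\mathbf{0} = p(A)\mathbf{v} = p(\lambda)\mathbf{v}$. Because $\lambda(\lambda - 1)$ has no repeated roots, this identity also offers a second route to part~(e) below, via the standard fact (usually proved via the minimal polynomial) that a square matrix annihilated by a polynomial with distinct linear factors is diagonalizable.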
(Note that both possibilities can occur:\ see the examples given in part~(a).) \bs \it (d) Decide whether the quoted statement is true or false: ``If $A$ and $B$ are idempotent matrices of the same size, then $AB$ is an idempotent matrix.'' (You must prove your answer to receive any credit.) \bs \rm \sol The statement is false. One counterexample is $\,\ds A = \bm 1 & 1 \\ 0 & 0\em$,\, $\ds B = \bm 1 & 0 \\ 1 & 0 \em$,\, $\ds AB = \bm 2 & 0 \\ 0 & 0 \em$: here $A$ and $B$ are idempotent, but $\ds (AB)^{2} = \bm 4 & 0 \\ 0 & 0 \em \neq AB$. \bs \it (e) Decide whether the quoted statement is true or false: ``Every idempotent matrix is diagonalizable.'' (You must prove your answer to receive any credit.) \bs \rm \sol The statement is true. If $A$ is an $n$ by $n$ idempotent matrix, then $A^{2} = A$ implies that $A\mathbf{v} = \mathbf{v}$ whenever $\mathbf{v}$ is a column of $A$. So, by linearity, the entire column space of $A$ is contained in the eigenspace corresponding to the eigenvalue $1$. The nullspace of $A$ is the eigenspace corresponding to the eigenvalue $0$. Let $\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}\}$ be a basis for the column space of $A$; let $\{\mathbf{w}_{1}, \ldots, \mathbf{w}_{\ell}\}$ be a basis for the nullspace of $A$. By the Rank-Nullity Theorem, $k + \ell = n$. Because eigenspaces for distinct eigenvalues of a matrix have intersection $\{\mathbf{0}\}$,\footnote{{\it Not} because the column space and nullspace of a matrix have intersection $\{\mathbf{0}\}$! That is false for matrices in general. (The row space and nullspace of any matrix do have intersection $\{\mathbf{0}\}$, since the row space and nullspace are orthogonal.)} Problem~2(b) implies that $\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}, \mathbf{w}_{1}, \ldots, \mathbf{w}_{\ell}\}$ are linearly independent. Thus, $\{\mathbf{v}_{1}, \ldots, \mathbf{v}_{k}, \mathbf{w}_{1}, \ldots, \mathbf{w}_{\ell}\}$ is a set of $n$ linearly independent eigenvectors of $A$, which shows that $A$ is diagonalizable. \bs \it \ni 7. (Extra Credit) \, As usual, let $\mathcal{C}[0,\, 1]$ denote the real vector space consisting of all continuous functions from the closed interval $[0,\, 1]$ to $\mathbb{R}$. Give an example of a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ with the property that $\dim(\ker T) = 2$. \bs \rm \ni{\bf Solution~\#1.} More generally, for any integer $n \geq 2$ we will exhibit a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ with $\dim(\ker T) = n$. Given $f \in \mathcal{C}[0,\, 1]$, define $T(f) \in \mathcal{C}[0,\, 1]$ by $$T(f)(x) = f(x) - g(x) \qq\mbox{where}\qq g(x) = \sum_{k=0}^{n-1} \( f\!\(\frac{k}{n-1}\) \cdot \frac{\ds \prod_{{\scriptstyle 0\leq m \leq n-1 \atop \scriptstyle m \neq k\tothe{4}{} }} \; \( x - \frac{m}{n-1}\)}{\ds \;\; \prod_{{\scriptstyle 0\leq m \leq n-1 \atop \scriptstyle m \neq k\tothe{4}{} }} \; \( \frac{k}{n-1} - \frac{m}{n-1}\)\tothe{5}{} \;\; }\).$$ Note that $g(x)$ is given by the Lagrange interpolation formula for the unique polynomial of degree less than $n$ that agrees with $f(x)$ at the $n$ different values $\,x = 0,\, 1/(n-1),\, 2/(n-1),\, 3/(n-1),\, \ldots,\, 1$. Consequently, if $f$ is a polynomial of degree less than $n$, then $g = f$, and so $f \in \ker T$. Thus, the set of all polynomials of degree less than $n$ is a subset of $\ker T$. On the other hand, if $f \in \ker T$ then $f(x) = g(x)$ for all $x \in [0,\, 1]$, which means that $f \in \mathcal{C}[0,\, 1]$ is a polynomial of degree less than $n$ (since $g$ is). Therefore, $\ker T$ is a subset of the set of all polynomials of degree less than $n$.
We have shown that $\ker T$ equals the set of all polynomials of degree less than $n$, which is an $n$-dimensional vector space.\footnote{This construction is also possible in the case where $n = 1$; we can define $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ by $T(f)(x) = f(x) - f(0)$. And if we want $\dim(\ker T) = 0$, let $T$ be the identity transformation.} Specializing to the case $n=2$, the formula for the linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ becomes\footnote{Notice that $\,\ds y = \Big[ f(1) - f(0) \Big] x + f(0)\,$ is the equation of the line segment connecting the endpoints of the graph of $f$.} $$T(f)(x) = f(x) - \bigg( \Big[ f(1) - f(0) \Big] x + f(0) \bigg),$$ and $T(f) = \mathbf{0}$ if and only if $f(x) = a x + b$ for some constants $a, b \in \mathbb{R}$. Thus, $\ker T$ equals the set of all such functions, a vector space of dimension $2$. \bs \ni{\bf Solution~\#2.} Still more generally, we can define a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ whose kernel has any prescribed dimension $\,\alpha \leq \le \dim(\mathcal{C}[0,\, 1])\ri\,$ (where we use $\le X \ri$ to denote the cardinality of the set $X$). In fact, we can even define a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ whose kernel equals any prescribed subspace $\mathcal{S} \subseteq \mathcal{C}[0,\, 1]$. Of course, $\mathcal{C}[0,\, 1]$ contains subspaces of every dimension up to $\le \dim(\mathcal{C}[0,\, 1])\ri$ (indeed, given $\alpha \leq \le \dim(\mathcal{C}[0,\, 1])\ri$, choose a basis\footnote{By Zorn's Lemma, every vector space has a basis.} for $\mathcal{C}[0,\, 1]$, pick any subset of this basis of cardinality $\alpha$, and define $\mathcal{S}$ to be the span of this subset of basis vectors). Given any subspace $\mathcal{S} \subseteq \mathcal{C}[0,\, 1]$, we construct a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ with $\ker T = \mathcal{S}$ as follows. Pick a basis $\{\mathbf{s}_{i}\}_{i \in I}$ for $\mathcal{S}$. Since any linearly independent set in a vector space can be enlarged to a basis, there exists a basis for $\mathcal{C}[0,\, 1]$ of the form $\{\mathbf{s}_{i}\}_{i \in I} \cup \{\mathbf{v}_{j}\}_{j \in J}$ (so for every $j \in J$ we have $\mathbf{v}_{j} \not\in \mathcal{S}$). Now define a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ by $$T(\mathbf{s}_{i}) = \mathbf{0} \q\mbox{for every $i \in I$}\qq\mbox{and}\qq T(\mathbf{v}_{j}) = \mathbf{v}_{j} \q\mbox{for every $j \in J$}$$ (recall that a linear transformation is fully determined by what it does to a basis).\footnote{In fact, we could let $T$ map the $\mathbf{v}_{j}$'s to any linearly independent set of vectors in $\mathcal{C}[0,\, 1]$ (e.g.\ $T$ could permute the $\mathbf{v}_{j}$'s arbitrarily). Thus, whenever $J$ is infinite (e.g.\ whenever $\mathcal{S}$ is finite-dimensional), there exist uncountably many different linear transformations $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ with kernel $\mathcal{S}$.} It is easy to verify that $\ker T = \mathcal{S}$. As a ``concrete'' example of a solution to the given problem, start with your favorite two (linearly independent) continuous functions. Say, take $f$ to be the function in $\mathcal{C}[0,\, 1]$ defined by $f(x) = e^{-1/x^{2}}$ for $x \neq 0$ (and $f(0) = 0$), and take $g$ to be any function in $\mathcal{C}[0,\, 1]$ with the property that for every $x \in [0,\, 1]$ the derivative $g'(x)$ does not exist.\footnote{For an example of such a function, see M.
Spivak, {\it Calculus,} Second Edition (Publish or Perish, Inc., Houston, 1980), p.~475, or P.~Franklin, {\it A Treatise on Advanced Calculus} (John Wiley \& Sons, Inc., New York, 1940), pp.~146--147, Exercise~10.} The construction of the previous paragraph yields a linear transformation $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ whose kernel is the subspace $\{c_{1} f + c_{2} g \; : \; c_{1}, c_{2} \in \mathbb{R}\} \subseteq \mathcal{C}[0,\, 1]$, which is $2$-dimensional. \bs \ni\dotfill \bs \rm \begin{center} {\Large{\bf Remarks}} \end{center} Problem~2(a) was Exercise~32 of \S1.5, on p.~72 of Penney (assigned as part of Problem Set~5). If $\mathcal{S}$ and $\mathcal{T}$ are subspaces of $\mathcal{V}$, their intersection $\mathcal{S} \cap \mathcal{T}$ is always a subspace of $\mathcal{V}$; however, their union $\mathcal{S} \cup \mathcal{T}$ is not a subspace in general. Note that $\mathcal{S} + \mathcal{T}$ is the smallest subspace of $\mathcal{V}$ that contains $\mathcal{S} \cup \mathcal{T}$. In Problem~2(b), if the vectors $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}\}$ span $\mathcal{S}$, and the vectors $\{\mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$ span $\mathcal{T}$, then it is easy to see that the vectors $\{\mathbf{s}_{1}, \mathbf{s}_{2}, \ldots, \mathbf{s}_{k}, \mathbf{t}_{1}, \mathbf{t}_{2}, \ldots, \mathbf{t}_{\ell}\}$ span $\mathcal{S} + \mathcal{T}$. So in the case where $\mathcal{S} \cap \mathcal{T} = \{\mathbf{0}\}$, Problem~2(b) shows that the union of a basis for $\mathcal{S}$ and a basis for $\mathcal{T}$ is a basis for $\mathcal{S} + \mathcal{T}$. This proves that if $\mathcal{S} \cap \mathcal{T} = \{\mathbf{0}\}$ then $\dim(\mathcal{S} + \mathcal{T}) = \dim(\mathcal{S}) + \dim(\mathcal{T})$. More generally, we always have $\dim(\mathcal{S} + \mathcal{T}) = \dim(\mathcal{S}) + \dim(\mathcal{T}) - \dim(\mathcal{S} \cap \mathcal{T})$. The easiest way to prove this is to pick a basis $\mathcal{B}$ for $\mathcal{S} \cap \mathcal{T}$, enlarge it to a basis $\mathcal{B} \cup \mathcal{B}'$ for $\mathcal{S}$, enlarge it to a basis $\mathcal{B} \cup \mathcal{B}''$ for $\mathcal{T}$, and show that $\mathcal{B} \cup \mathcal{B}' \cup \mathcal{B}''$ is a basis for $\mathcal{S} + \mathcal{T}$. Problem~4 was Example~4 of \S2.2, on p.~101 of Penney. Of course, it is easier to solve with our present tools than it was back in \S2.2. Regarding Solution~\#2 to Problem~6(a): in Problem Set~12 (Exercise~8 of \S4.5 of Penney), you proved both algebraically and geometrically that the projection matrix $P = A (A^{t} A)^{-1} A^{t}$ is idempotent. In Problem~6(d), the quoted statement becomes true if we add the condition that $AB = BA$. For in this case, $A^{2} = A$ and $B^{2} = B$ imply $(AB)^{2} = ABAB = A(BA)B = A(AB)B = A^{2} B^{2} = AB$. The solution to Problem~6(e) shows that {\it every} idempotent $2$ by $2$ matrix is either the identity matrix, the zero matrix, or else equals $$\bm a & b \\ c & d \em \bm 1 & 0 \\ 0 & 0 \em \bm a & b \\ c & d \em^{-1} = \frac{1}{ad - bc}\bm a & b \\ c & d \em \bm 1 & 0 \\ 0 & 0 \em \bm d & -b \\ -c & a \em = \bm \ds \frac{ad}{ad - bc} & \ds -\frac{ab}{ad - bc} \\ \noalign{\ms} \ds \frac{cd}{ad - bc} & \ds -\frac{bc}{ad - bc} \em$$ where $ad - bc \neq 0$. 
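As a quick consistency check on this last formula: the displayed matrix has trace $\ds \frac{ad}{ad - bc} - \frac{bc}{ad - bc} = 1$ and determinant $\ds \frac{(ad)(-bc) - (-ab)(cd)}{(ad - bc)^{2}} = 0$, which is exactly what the eigenvalues $1$ and $0$ of an idempotent matrix other than $I$ and the zero matrix require.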
Throwing in the condition that $a$, $b$, $c$, and $d$ are all nonzero gives the set of all possible answers to Problem~6(a).\footnote{Can you exhibit Solution~\#2 as a special case of this?} In the extra-credit problem, we obviously cannot define $T: \mathcal{C}[0,\, 1] \la \mathcal{C}[0,\, 1]$ by $T(f) = f''$, since for an arbitrary continuous function $f : [0,\, 1] \la \mathbb{R}$, it is not necessarily the case that $f''$ is a continuous function from $[0,\, 1]$ to $\mathbb{R}$. In fact, as noted at the end of Solution~\#2 to the extra-credit problem, $f'$ might not even be a function from $[0,\, 1]$ to $\mathbb{R}$ at all: the value of $f'$ could be nonexistent at every single point of $[0,\, 1]$. Solution~\#2 to the extra-credit problem was based on an important algebraic property of vector spaces. If $\mathcal{V}$ is any vector space, and $\mathcal{S} \subseteq \mathcal{V}$ is any subspace, then there exists a subspace $\mathcal{T} \subseteq \mathcal{V}$ such that $\mathcal{S} + \mathcal{T} = \mathcal{V}$ and $\mathcal{S} \cap \mathcal{T} = \{\mathbf{0}\}$. (This property is known as {\it semisimplicity}.) \bs \ni\dotfill \bs \begin{center} {\Large{\bf Graded Exams}} \end{center} To view your graded final exam, go to 104 Ritter Hall between Jan.~13, 2004, and Jan.~13, 2005. The graded final exams will not be available before or after this range of dates. As announced in the syllabus, every course grade I assign (with the exception of the special grades ``I,'' ``NR,'' and ``X'') shall be considered final once the grade is recorded; I will request a change of grade, when warranted, if a computational or procedural error occurred in the original assignment of the grade, but a grade will not be changed based upon a re-evaluation of any student's work. \vfill \end{document}