\pageno=224 \define\cent{\operatorname{Cent}} \define\ph-{\phantom{-}} \def\Q{\Bbb Q } \def\H{\Bbb H } %\input mathdef \heading{Appendix B. Skew Field Theory} \endheading \flushpar These notes have dealt almost exclusively with commutative rings, in part because of the greater ease in dealing with two-sided ideals than with one-sided ideals. However, noncommutative rings do arise naturally in many mathematical contexts, and should also be considered. In harmony with one of the themes of these notes, we shall apply certain noncommutative rings (namely skew fields) to elementary number theory, although there are several deep connections to geometry and other subjects that are beyond the scope of these notes. Our brief excursion into skew fields is motivated by the following theorem: \claim {Lagrange's Four Square Theorem} Every natural number is a sum of four squares. \endclaim The reader will notice at once the similarity to Fermat's theorem concerning primes of the form $a^2 +b^2 $, which equals $(a+bi)(a-bi)$ in $\Bbb Z [i]$. So we would like to find a similar way to factor $a^2 +b^2 +c^2 +d^2 $. Of course we need too many square roots of $-1$ to work in $\Bbb C$ (or indeed in any field), and instead must seek our solution in a skew field. (Recall the definition from Chapter 13.) In order to proceed, we need noncommutative generalizations of some of the notions described in earlier chapters. But first we need to know more about the nature of commutativity. We say elements $a,b$ {\it commute} if $ab = ba.$ \rem {0} Suppose $a \in R$ is invertible. If $a$ commutes with $b$ in $R,$ then $a^{-1}$ also commutes with $b$. (Indeed, $a^{-1}b = a^{-1}baa^{-1} = a^{-1}aba^{-1} = ba^{-1}.$) \! Although arbitrary elements need not commute, certain elements (such as $0,1$) commute with every element of the ring. Define the {\it center} $\cent (R)$ of a ring $R$ to be $ \{ c \in R : cr=rc \text{ for all } r \text{ in } R \} $.
Obviously, $\cent (R) $ is a commutative ring, which is a field if $R$ is a skew field, by Remark 0. Thus any skew field can be viewed as a vector space over its center. Generalizing Chapter 21, we say an $F$-{\it algebra} is a ring $R$ containing an isomorphic copy of $F$ in its center $C$, {\it i.e.}, there is an injection $F\to C.$ (There is a more abstract definition of algebra, given in Exercise 2, but this definition serves our purposes.) Perhaps the most familiar noncommutative algebra is $M_n(F),$ the algebra of $n \times n$ matrices over a field $F$, where the injection $F \to M_n(F)$ sends $\a $ to the scalar matrix $\a I.$ \subheading \nofrills{The Quaternion Algebra} \Definition 1 (Hamilton's Algebra of Quaternions.) Let $\Bbb H$ denote the two-dimensional vector space over $\Bbb C$, with base $\{ 1,j\} $; thus $\Bbb H = \Bbb C + \Bbb Cj = \Bbb R+\Bbb R i+\Bbb R j+\Bbb R k,$ where $k = ij$. We define multiplication on $\Bbb H$ via the rules $$ \align i^2 &= j^2 = k^2 = -1;\\ ij &= -ji = k;\\ jk &= -kj = i;\\ ki &= -ik = j. \endalign $$ (Note: The last three lines are encompassed by the rule $ijk = -1.$ For example, $kj = -(-1)kj = -ijkkj = -i$.) This multiplication extends via distributivity to all of $\Bbb H$, by $$(a_1 + b_1i+ c_1 j+d_1k)( a_2 + b_2i+c_2j+ d_2k) = a_3+b_3i+c_3j+d_3k$$ for $a_u, b_u, c_u, d_u \in \R ,$ where $$\align a_3 &= a_1 a_2 - b_1 b_2-c_1 c_2 -d_1 d_2; \\ b_3 &= a_1 b_2+ b_1 a_2 +c_1 d_2 -d_1 c_2; \\ c_3 &= a_1 c_2- b_1 d_2 + c_1 a_2 +d_1 b_2; \\ d_3 &= a_1 d_2 + b_1 c_2- c_1 b_2 +d_1 a_2 . \endalign $$ \! We have no guarantee {\it a priori} that $\Bbb H$ is an $\Bbb R$-algebra, although it is not difficult to verify the axioms directly. However, it is more elegant to view $\Bbb H$ as a subalgebra of an algebra we already know, namely $M_2 (\C )$. \Proposition 2 $\H $ is a skew field.
\Proof There is a map $\varphi \colon \Bbb H \to M_2( \Bbb C)$ given by $y+zj \mapsto \left( \smallmatrix \ph-y & z\\ -\bar z & \bar y \endsmallmatrix \right) $. Writing $\hat i, \hat j, \hat k$ for the respective images of $i, j, k,$ we see that $\hat i = \left( \smallmatrix i & \ph-0\\ 0 & -i \endsmallmatrix \right)$, $\hat j = \left( \smallmatrix \ph-0 & 1\\ -1 & 0 \endsmallmatrix \right)$, and $\hat k = \left( \smallmatrix 0 & i\\ i & 0 \endsmallmatrix \right)$ and we have $ \hat i^2 = \hat j^2 = \hat k^2 = \hat i \hat j \hat k = -1.$ Thus $\varphi$~respects the defining relations for $\Bbb H$ and thereby preserves multiplication. $\ker \varphi = 0$ by inspection, so we conclude $\Bbb H$ is a ring, and $\varphi$ is a ring injection. Furthermore, let $d = \det \left( \smallmatrix \ph-y & z\\ -\bar z & \bar y \endsmallmatrix \right) = y \bar y+z \bar z$, a positive real number whenever $y \ne 0$ or $z \ne 0$; then $$ \left ( \smallmatrix \ph-y & z \\ -\bar z & \bar y \endsmallmatrix \right ) ^{-1 } = d^{-1} \left ( \matrix \bar y & -z \\ \bar z & \ph-y \endmatrix \right ) ,$$ implying $(y+zj)^{-1} = d^{-1}(\bar y - z j).$ Hence $\Bbb H$ is a skew field. \qed If it seems that we have pulled $\varphi $ out of a hat, see Exercise 4 for a more systematic approach. \demo{Digression} $\H$ has dimension 4 as a vector space over $\Bbb R,$ whence the name ``quaternions''; this algebra is closely related to the group of quaternions, which we studied earlier, {\it cf.} Exercise 18. For many years Hamilton had tried in vain to find a skew field of dimension 3 over $\Bbb R,$ although today it is rather easy to see that there is none, {\it cf.} Exercise 15. Indeed, there is a theorem (which we shall not prove here) that the dimension of a skew field over its center must be a square number.
\enddemo \Remark 3 In the proof of Proposition 2 we defined a multiplicative map $N \colon \H \setminus \{ 0\} \to \Bbb R ^+ $ given by $$N(y+zj) = \det \pmatrix \ph-y &z\\ -\bar z &\bar y \endpmatrix = y \bar y+z \bar z.$$ ($N$ is multiplicative because $\det $ is multiplicative.) We can rewrite $N$ as $N(a+bi+cj+dk) = a^2 +b^2 +c^2 +d^2 $, which is a sum of four squares. \! \subheading \nofrills{Proof of Lagrange's Four Square Theorem} This discussion bears directly on Lagrange's Four Square Theorem, in the following manner. \Corollary 4 If $n_1$ and $n_2$ are sums of four square integers, then so is $n_1n_2$. \Proof Write $n_u = a_u^2+b _u^2 +c _u^2 +d _u^2 = N(a_u+b_ui+c_uj+d_uk)$ for $u = 1,2$. Then $n_1n_2 = N(q),$ where $$q = (a_1 +b_1i+ c_1 j+d_1k)( a_2 + b_2i+ c_2j+ d_2 k) . \quad \square $$ \! \Corollary 5 To prove Lagrange's Four Square Theorem it suffices to assume the natural number $n$ is prime. \Proof Every natural number $n > 1$ is a product of primes, so the assertion follows from Corollary 4 by induction ($0$ and $1$ trivially being sums of four squares).\qed \Lemma 6 For any prime number $p,$ there is a number $n < \frac p 2 $ such that $np$ is a sum of three square numbers, one of which is $1$. \Proof One may assume $p$ is odd. In the field $\Bbb Z_p$ the sets $$\{ -[a]^2: -\frac p 2 < a < \frac p 2 \} \quad \text{and} \quad \{ [b]^2+1: -\frac p 2 < b < \frac p 2 \} $$ each take on at least $\frac {p+1}2$ distinct values, since any duplication comes at most in pairs. (In any field the equation $x^2 - t = 0$ has at most two solutions.) But $\Bbb Z_p$ has $p$ elements, so $-[a]^2 = [b]^2+1$ for suitable $a,b$, {\it i.e.}, $p$ divides $a^2 +b^2 +1$. Since $a^2 +b^2 +1 < 2(\frac {p-1} 2)^2 + 1$ one sees $n < \frac p 2 $.\qed The proof of Lagrange's Four Square Theorem can be concluded by induction on the smallest $n$ such that $np$ is a sum of four squares; this is not too difficult, {\it cf.} Exercises 6 and 7. However, it is enlightening to recast the proof in terms of the structure of algebras, as was done in proving Fermat's theorem.
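Corollary 4 amounts to Euler's four-square identity, obtained by expanding $N(q_1q_2) = N(q_1)N(q_2)$ coefficientwise. The identity can be checked numerically; the following Python sketch (ours, not part of the text; the names {\tt qmul} and {\tt norm} are our own) implements the coefficient formulas of Definition 1.

```python
# Check of Corollary 4 / Euler's four-square identity: the quaternion norm
# N(a+bi+cj+dk) = a^2+b^2+c^2+d^2 is multiplicative, so a product of two
# sums of four squares is again a sum of four squares.

def qmul(q1, q2):
    """Quaternion product, by the coefficient formulas of Definition 1."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def norm(q):
    """N(q) = a^2 + b^2 + c^2 + d^2."""
    return sum(x*x for x in q)

# Sanity checks of the defining relations: ij = k and ji = -k.
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)

# N is multiplicative, so n1*n2 = N(q1 q2) is again a sum of four squares.
q1, q2 = (1, 2, 3, 4), (5, 6, 7, 8)
assert norm(qmul(q1, q2)) == norm(q1) * norm(q2)
```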
We shall sketch the proof, leaving the verifications as exercises. We need a quaternion version of the Gaussian integers $\Bbb Z [i]$. The obvious candidate is $\Bbb Z +\Bbb Z i+\Bbb Z j+\Bbb Z k,$ where $i^2 = j^2 = k^2 = -1 = ijk$. The first step then is to define a Euclidean algorithm analogous to the Gaussian integers, but the natural definition doesn't quite work for this ring. However, one can define instead the {\it ring of integral quaternions} $R = \{ a+bi+cj+dk: 2a,2b,2c,2d\in \Bbb Z \text{, all of the same parity} \} $, and this indeed satisfies the Euclidean algorithm (Exercise 8). Consequently, as in the proof of Proposition 16.11 one concludes that every left ideal is principal. Now we obtain a noncommutative arithmetic on $R$ by defining an element $r$ to be irreducible whenever $Rr$ is a maximal left ideal and defining gcd($r_1,r_2$) to be the element $r_3$ such that $Rr_1+Rr_2 = Rr_3$. One can conclude that an integral quaternion $q$ is irreducible iff $N(q)$ is a prime number in $\Bbb Z$ ({\it cf.} Exercise 10), from which it follows at once that any prime $p$ in $\Bbb Z$ can be written as $N(q),$ where $q$ is an irreducible integral quaternion dividing $p$. Hence, $4p = N(2q)$ is a sum of four squares in $\Bbb Z $, so we conclude with Exercise 6. Let us return to quaternion algebras. We could build a quaternion algebra $Q$ starting from any field $F$; namely take a four-dimensional vector space $F+Fi+Fj+Fk$ with multiplication defined by the relations $i^2 = j^2 = k^2 = -1 = ijk$. One could hope for $Q$ to be a skew field, but this is not the case when $F = \Bbb Z_p $. (If $a^2+b^2+c^2 = np$ as in Lemma 6, then $(ai+bj+ck)^2 = -np = 0$ in $Q$, so $ai+bj+ck$ is a zero-divisor in $Q$.) More generally, every finite skew field is commutative, but this result needs some preparation. \subheading\nofrills {Polynomials over Skew Fields} \flushpar In this discussion, $D$ denotes a skew field having center $F$.
Note that $x \in \cent (D[x]).$ If $f,g$ are polynomials, we say $g$ \underbar{divides} $f$ if $f = hg$ for some $h$ in $D[x]$. Also, given $f = \Sigma d_ix^i$ in $D[x]$ write $f(d)$ for $\Sigma d_id^i$, {\it i.e.}, ``right substitution'' for~$d$. (One must be careful about the side one substitutes, since $d$ need not commute with the $d_i$.) \Remark 7 Given $f \in D[x]$ and $d$ in $D,$ we have $$ f(x) = q(x)(x-d)+f(d) $$ for some $q$ in $D[x]$. In particular, $f(d) = 0$ iff $x-d$ divides $f(x)$. \! The proof follows from the Euclidean algorithm for polynomials (Proposition 16.7). Explicitly, if $f = \Sigma_{i=0}^n~d_ix^i,$ then let $g(x) = f-d_nx^{n-1}(x-d),$ of degree $< n$; one sees by induction on $n$ that $$g = q_1(x)(x-d)+g(d) = q_1(x)(x-d)+f(d),$$ implying $f = g+d_nx^{n-1}(x-d)= (q_1+d_nx^{n-1})(x-d)+f(d).$ \claim{Remark 8} Given $f = hg$, let us put $\bar f = h(x)g(d)$; writing $h = \Sigma d_i x^i,$ we see $\bar f = \Sigma d_ig(d)x^i$, so $\bar f(d) = \Sigma d_ig(d)d^i = f(d)$. \endclaim This simple computation provides our main tool: \claim{Proposition 9} With notation as in Remark 8, if $d$ is a root of $f$ but not of $g$, then $g(d)dg(d)^{-1}$ is a root of $h$. (Note that here we should assume $D$~is a skew field, to ensure that $g(d)$ is invertible.) \endclaim \demo{Proof} By Remark 8, $\bar f(d) = f(d) = 0$. Hence, $x -d$ divides $\bar f = h(x)g(d)$; consequently $x-g(d)dg(d)^{-1}$ divides $g(d)(h(x)g(d))g(d)^{-1} = g(d)h(x)$ and thus divides $h$.\qed \enddemo By a {\it conjugate} of $d$ in $D$ we merely mean an element of the form $ada^{-1},$ for suitable $a$ in $D$. Note that if $f(d)=0$ for $f = \sum _i\a _ix^i \in F[x]$ then $$f(ada^{-1}) = \sum _i\a _iad^ia^{-1}= a \sum_i \a _id^ia^{-1} = af(d)a^{-1} = a0a^{-1} = 0;$$ {\it i.e.}, every conjugate of $d$ is also a root of $f$.
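The computation above can be watched concretely in Hamilton's quaternions: $x^2+1$ has central coefficients, and every conjugate $aia^{-1}$ of its root $i$ is again a root. Below is a small Python sketch (ours, not part of the text; the helper names are our own), using the coefficient formulas of Definition 1 and the inverse formula of Proposition 2.

```python
# Illustration: for the central polynomial x^2 + 1 over Hamilton's
# quaternions, every conjugate a i a^{-1} of the root i is again a root.
from fractions import Fraction

def qmul(q1, q2):
    """Quaternion product, by the coefficient formulas of Definition 1."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj_by(a, q):
    """Return a q a^{-1}, using a^{-1} = N(a)^{-1} (quaternion conjugate of a)."""
    n = sum(x*x for x in a)                               # N(a)
    a_inv = tuple(Fraction(s*x, n) for s, x in zip((1, -1, -1, -1), a))
    return qmul(qmul(a, q), a_inv)

i = (0, 1, 0, 0)
for a in [(1, 1, 0, 0), (2, 0, 3, 0), (1, 2, 3, 4)]:
    d = conj_by(a, i)                       # a conjugate of the root i
    assert qmul(d, d) == (-1, 0, 0, 0)      # d is again a root of x^2 + 1
```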
The big difference between the commutative and noncommutative cases is that $d$ is its only conjugate when $D$ is commutative, whereas there can be infinitely many distinct conjugates when $d \in D \setminus F.$ In fact, there are enough conjugates for the following theorem. \claim{Theorem 10} If $f(x) \in F[x]$ is monic irreducible and has a root $d_1$ in $D,$ then $f = (x-d_n) \dots (x -d_1)$ in $D[x]$, where each $d_i$ is a conjugate of $d_1$. \endclaim \demo{Proof} Let $g = x -d_1$, and write $f = hg. $ Any element $d = ad_1a^{-1} \neq d_1$ is a root of $f$, but not of $g,$ so applying Proposition 9 yields a root $d_2 = (d-d_1)d(d-d_1)^{-1}$ of $h$; we continue by induction on degree. To make sure the procedure does not break down, we require the following observation.\enddemo \demo{Remark 11} With assumptions as in Theorem 10, if $h \in D[x]$ is monic with $\deg h \leq \deg f$ and every conjugate of $d_1$ is a root of $h$, then $h = f$. (Indeed, take a monic counterexample $h$ of minimal degree; then each conjugate $ad_1a^{-1}$ of $d_1$ is a root of the polynomial $aha^{-1}$, and thus of $h-aha^{-1}$, which has lower degree since both $h$ and $aha^{-1}$ are monic. By minimality $h = aha^{-1}$ for every $a$ (otherwise normalizing $h-aha^{-1}$ to be monic would yield a smaller counterexample), so the coefficients of $h$ lie in $F$; but then $f$, being the minimal polynomial of $d_1$ over $F$, divides $h$, forcing $h = f$, a contradiction.) \qed \enddemo Actually, Theorem 10 is only half a theorem. There is a sort of uniqueness to the factorization, as follows: \claim{Theorem 12} If $f = (x-d_n) \dots (x-d_1) \in D[x]$ and $f(d) = 0,$ then $d$ is a conjugate of some $d_i$; in fact, writing $f_i = (x-d_i)(x-d_{i-1}) \dots (x-d_1)$ and $f_0 = 1$, one has $d = f_i(d)^{-1}d_{i+1}f_i(d)$ for some $i$. \endclaim \demo{Proof} If $f_1(d) = 0$ then $d = d_1$. Hence, we may assume $f_1(d) \neq 0$. Take $i$ with $f_i(d) \neq 0$ and $f_{i+1}(d) = 0$. Letting $f = f_{i+1}$ and $g = f_i$ in Proposition~9 we see $f = (x-d_{i+1})g$, so $g(d)dg(d)^{-1}$ is a root of $x-d_{i+1}$, {\it i.e.}, is $d_{i+1}$ itself.
Hence, $d = g(d)^{-1}d_{i+1}g(d).$ \qed \enddemo \subheading \nofrills{Structure Theorems for Skew Fields} \flushpar\claim{Theorem 13} (Skolem-Noether Theorem) Suppose $a \in D$ is algebraic over~$F$. Any other root $b \in D$ of the minimal polynomial $f(x)$ of $a$ is conjugate to $a$. \endclaim \demo{Proof} Write $f = (x -d_n) \dots (x -d_1)$ by means of Theorem 10, so that each $d_i$ is conjugate to $a$, and apply Theorem 12 (since $f$ also is the minimal polynomial of $b$).\qed \enddemo \rem {14} Another way of expressing this result is as follows: If $K/F$ is separable and $L \supset F$ is another subfield of $D$ that is $F$-isomorphic to $K$, then the isomorphism is given by conjugation by some element $d$ of $D$; in particular, $dKd^{-1} = L.$ (Indeed, by Exercise 23.3 we can write $K = F[a];$ letting $b\in L$ be the image of $a$ under the isomorphism, we see that $a$ and $b$ have the same minimal polynomial, so $b = dad^{-1}$ for suitable $d$ in $D.$) It is possible to prove this result also for nonseparable extensions, but the proof will not be given here. \! Next, define the {\it centralizer } $C_R(S)$ of an arbitrary subset $S$ of a ring $R$ to be $\{ r \in R : rs=sr \text{ for all } s \text{ in } S \} $. It is easy to check that $C_R(S)$ is a subring of $R$, and in view of Remark 0, if $D$ is a skew field then $C_D(S)$ is a skew field. We are ready for a deep result of Wedderburn. \Theorem {15} (Wedderburn) Every finite skew field $D$ is commutative. \Proof Let $F = \cent (D),$ a finite field that thus has order $p^t$ for a suitable prime $p$ and some $t;$ let $K \supset F$ be a commutative subring of $D$ having maximal order. Then $K$ is a finite integral domain and thus a field, and being a vector space over $F$, has order $m=p^{u}$ for some multiple $u$ of $t$. Supposing $D$ noncommutative, we see $K \ne F,$ since $F[d]$ is a commutative subring properly containing $F$ for any $d$ in $D \setminus F$.
By Theorem 26.8, $K/F$ is cyclic Galois, implying by the Galois correspondence that there is an intermediate field $F \subseteq L \subset K$ such that $[K:L] = p.$ Let $D' = C_D(L).$ By Theorem 13, a nontrivial automorphism of $K/L$ is given by conjugation by an element $d$ of $D$; clearly, $d \in D',$ implying $D'$ is not commutative. Let $n = |D'|.$ {\it A fortiori}, $K$ is a maximal commutative subring of $D'$, and $|L| = p^{u-1}.$ Moreover, $L = \cent (D')$: indeed, $L \subseteq \cent (D') \subseteq K$ by the maximality of $K$, and $\cent (D') \ne K$ since $d$ does not centralize $K$; as $[K:L] = p$ is prime, this forces $\cent (D') = L.$ Any commutative subring $K'$ of $D'$ properly containing $L$ must also have order precisely $p^u = m$. (Indeed, $K'$ is a field extension of $L$, so $[K':L] \ge p,$ implying $|K'| \ge m = |K|,$ so by the choice of $K$ we see $|K'| = m.$) Now viewing $H = K\setminus \{ 0\} $ as a subgroup of the group $G = D'\setminus \{ 0\} $, we see that the number of subgroups conjugate to $H$ is at most $\frac {|G|}{|H|} = \frac {n-1}{m-1}.$ But each subgroup contains the element 1, so the number of conjugates of elements of $H$ in $D'$ is at most $\frac {(m-2)(n-1)}{m-1} +1 < n-1, $ implying some element $a$ of $D'$ is not conjugate to any element of $K$. Let $K' = L[a],$ which is commutative (since $a$ centralizes $L$) and properly contains $L$ (if $a$ were in $L \subseteq K$, it would be conjugate to an element of $K$). As noted above, $K'$ is a field of order $m$. But any extension of $L$ of order $m $ is a splitting field of the polynomial $x^m-x$ over~$\Bbb Z_p$ and thus over $L$, and hence they all are isomorphic over $L$; thus $K$~and $K'$ are conjugate in $D'$ by Theorem 13, contrary to the assumption that $a$ is not conjugate to any element of $K$. \qed Are there skew fields other than the quaternions? It took algebraists another 50 years to produce a skew field other than Hamilton's. In fact, Frobenius proved $\Bbb H$ is the only noncommutative skew field whose elements all are algebraic over $\Bbb R$, {\it cf.} Exercise 15. If one is willing to consider rings with zero-divisors, one has matrix rings; then one can generalize parts of Proposition 2 and Remark 3, {\it cf.} Exercise~12.
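The failure of the quaternion construction over $\Bbb Z_p$, noted earlier and consistent with Wedderburn's theorem, can be checked directly: by Lemma 6 there are $a,b$ with $a^2+b^2+1 \equiv 0 \pmod p$, and then $q = ai+bj+k$ satisfies $q^2 = -(a^2+b^2+1) \equiv 0$, so $q$ is a nonzero zero-divisor. A Python sketch of ours (not part of the text; the name {\tt qmul\_mod} is our own):

```python
# Consistent with Wedderburn's theorem, the quaternion construction over Z_p
# never yields a skew field: q = ai + bj + k with a^2 + b^2 + 1 = 0 (mod p)
# satisfies q^2 = 0, so q is a nonzero zero-divisor.

def qmul_mod(q1, q2, p):
    """Quaternion product (coefficient formulas of Definition 1), reduced mod p."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return ((a1*a2 - b1*b2 - c1*c2 - d1*d2) % p,
            (a1*b2 + b1*a2 + c1*d2 - d1*c2) % p,
            (a1*c2 - b1*d2 + c1*a2 + d1*b2) % p,
            (a1*d2 + b1*c2 - c1*b2 + d1*a2) % p)

for p in [3, 5, 7, 11, 13]:
    # Lemma 6 guarantees such a, b exist for every odd prime p.
    a, b = next((a, b) for a in range(p) for b in range(p)
                if (a*a + b*b + 1) % p == 0)
    q = (0, a, b, 1)                # nonzero, since its k-coefficient is 1
    assert qmul_mod(q, q, p) == (0, 0, 0, 0)
```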
Actually, another glance at Proposition 2 reveals the key role played by matrices, and, indeed, rings of matrices are the focus of noncommutative algebra. But that starts another tale that must be told elsewhere. \subheading\nofrills{Exercises} \rostex \item $M_n(F)$ is an $F$-algebra, where we identify $F$ with the ring of scalar matrices ({\it cf.} Exercise 13.1). \item Define an {\it $F$-algebra} to be an $F$-vector space $R$ that also is a ring satisfying the following property, for all $\alpha$ in $F$ and $r_i$ in $R$ (where multiplication is taken to be the ring multiplication or the scalar multiplication of the vector space, according to its context): $$ \alpha (r_1r_2) = (\alpha r_1)r_2 = r_1(\alpha r_2).$$ Define the ring homomorphism $\varphi \colon F \to R$ given by $\alpha \mapsto \alpha \cdot 1$. Then $F\cdot 1$ is a subfield of $\cent (R)$, so we have arrived at the definition in the text. This definition is very useful in generalizing to algebras over arbitrary commutative rings, not necessarily fields. \item Show that the regular representation (Exercise 21.15) also works for noncommutative algebras. \item Derive Proposition 2 via the regular representation, viewing $\Bbb H$ as the vector space $\C +\C j$ and considering the matrices corresponding to right multiplication by $i,j,k$ respectively. \item The map $\bar{ }: \Bbb H \to \Bbb H$ given by $a+bi+cj+dk \mapsto a-bi-cj-dk$ is an anti-automorphism ({\it i.e.}, reversing the order of multiplication) of order 2. (Hint: $N(q) = q \bar q$.) \subhed {Proof of Lagrange's Four Square Theorem} The following exercises provide two proofs of Lagrange's Four Square Theorem. \item If $2n = a^2 +b^2 +c^2 +d^2,$ then rearranging the summands so that $a \equiv b \pmod 2$ and $c \equiv d \pmod 2$ one can write $n$ as the sum ${(\frac{a+b}2)}^2 + {(\frac{a-b}2)}^2 + {(\frac{c+d}2)}^2 + {(\frac{c-d}2)}^2$. \item (Direct computational proof of Lagrange's Four Square Theorem.) Suppose $p$ is not a sum of four squares.
Take $n > 1$ minimal, such that $np = a^2 +b^2 +c^2 +d^2$ is a sum of four squares; $n$ is odd by Exercise 6. Take the respective residues $a',b',c',d'$ of $a,b,c,d \pmod n$, each of absolute value $< \frac n2$. Then there exists $m$ such that $nm = (a')^2+(b')^2+(c')^2+(d')^2 < n^2$; hence, $m < n$. Since $N(a+bi+cj+dk) =np,$ note that $(a+bi+cj+dk)(a'-b'i-c'j-d'k)$ is a quaternion each of whose coefficients is congruent to $0 \pmod n$; dividing through by $n$ yields a quaternion whose norm is $ < np$, a contradiction. \item Verify the Euclidean algorithm for the ring of integral quaternions. (Hint: Use a similar trick to that used for the Gaussian integers.) \item Any natural number $m>1$ is reducible as a quaternion. (Hint: Assume that $m$ is an odd prime. Then $m$ divides $1+a^2 +b^2$ for suitable integers $a,b$. Let $q = 1+ai+bj$ and $d =$ g.c.d.$(m,q) = r_1m+r_2q$. $N(d-r_1m) = N(r_2q) = N(r_2)(1+a^2 +b^2 )$ is a multiple of $m$, implying $m|N(d)$. In particular, $d$ is not invertible, and $Rm \subset Rd$ since $q\in Rd$.) \item Prove an integral quaternion $r$ is irreducible iff $N(r)$ is prime. (Hint: Take a prime divisor $p$ of $N(r)$ in $\Bbb Z $, and let $d$ = g.c.d.$(p,r)$. Then $p = N(d) = N(r)$.) \item Suppose $F$ is a field containing a primitive $n$th root $\rho$ of 1, and let $V$ be the $n^2$-dimensional vector space with base denoted by $\{ y^uz^v: 0 \le u,v < n\} $, made into a ring by the relations $y^n = 1 = z^n$ and $zy = \rho yz$. Prove that this is an algebra with center $F$, and has no proper ideals $\ne 0$. \item Using the regular representation, view $V$ of Exercise 11 as a subring of $M_n(F[y])$, and define the norm by means of the determinant, as in Remark 3. Now prove that if $t_1$ and $t_2$ each are sums of $n^2$ $n$th powers of integers, then so is $t_1t_2$. \subhed {Frobenius' Theorem} The next three exercises comprise a quick proof of an interesting result of Frobenius.
\item $C_R(K) = K,$ for any maximal commutative subring $K$ of a ring $R$. \item If $S,T \subseteq R,$ then $C_R(S \cup T) = C_R(S) \cap C_R(T).$ \item (Frobenius' Theorem.) The only skew fields $D$ algebraic over $\R $ are $\R ,\C ,$ and $\H .$ (Hint: Suppose $D \neq \R , \C.$ Any commutative subring of $D$ properly containing the center $F$ must be isomorphic to $\C .$ Thus $F = \R .$ Take a maximal commutative subring $C = F[i],$ where $i^2 = -1.$ Claim: Any element $a$ in $D \setminus C$ is contained in a copy of $\H $ that also contains $C$. Indeed, $F[a] \approx \C, $ and thus $F[a] = F[j]$ with $j^2 = -1.$ Let $c = ij+ji$. Then $c$ commutes with both $i$ and $j,$ and thus by Exercise 14 is in $F$. This proves that $D' = F +Fi +Fj +Fij$ is a subring of $D.$ Now let $b = ij-ji \in D'.$ Then $ib = -bi;$ replacing $j$ by $b$ enables us to assume $ij = -ji,$ and it follows readily that $D' \approx \H ,$ as claimed. If $D \supset \H $ then as above, $D$ contains another subring $D'' = F +Fi+Fj'+Fij'\approx \H ,$ where $ij' = -j'i$ and ${j'}^2 = -1;$ hence, $j'j^{-1}$ commutes with $i$, implying $j'j^{-1} \in F[i],$ or $j' \in F[i]j \subset D',$ contrary to $D'' \neq D'$.) \item Suppose $R$ is a ring containing a field $F$, and $n = [R:F] < \infty $. Then $R$ is algebraic over $F$, in the sense that any $r$ in $R$ satisfies a nontrivial equation $\sum _{ i=0}^n\alpha_ir^i = 0$ for $\alpha_i$ in $F$. \item Given any group $G$ and field $F,$ define the {\it group algebra} $F[G]$, which as a vector space over $F$ has base $G$, with multiplication of the base elements given by the multiplication in $G$. Show that $Z(G) \subseteq \cent (F[G])$. \item Let $Q$ denote the quaternion group (with relations $a^4 = 1 = b^4$;\ $bab^{-1} = a^{-1};\ a^2 = b^2 $). Show that $\Bbb H$ contains a quaternion group (generated by $i,j$). Moreover, there is an algebra homomorphism $\R [Q] \to \Bbb H$ given by $a \mapsto i, \ b \mapsto j$; the kernel is $\langle a^2 +1 \rangle$.
\item Generalizing Exercise 17, define the {\it monoid algebra} $F[M]$ for any monoid $M$. Show that $F[x]$ can be written as a monoid algebra. \endroster \end