Under Construction!

Since I'm still building this page, I figured I ought to use the brilliance of MathJax to depict some of the most amazing formulae I've seen. If there are any mistakes, or if my cursory explanations are hard to understand, feel free to contact me.


Elegance via Equality: Bott Periodicity


History

    Of course, I can't give a full description of Bott Periodicity, since I've only recently grasped the proof. So what is Bott Periodicity, and why is it so important to determining how "division" works in higher dimensions?

    One of the problems that perplexed mathematicians for centuries is the idea of higher-dimensional division algebras. A division algebra, in a few words, is a mathematical object with two operations, one commutative (addition) and one that need not be (multiplication), such that every non-zero element has a multiplicative inverse.[1] This idea at first seems a bit far-fetched, since extending the concept of division to a non-commutative setting seems unmotivated by the real world. Fortunately, nature had a trick up her sleeve. In two dimensions, it is relatively clear that rotations commute: if I rotate a pencil by $30^{\circ}$ and then again by $15^{\circ}$, I end up in the same place as if I first rotate by $15^{\circ}$ and then by $30^{\circ}$. Unfortunately, we don't have the same luxury in three dimensions. If you take an object and rotate it by $45^{\circ}$ about a distinguished $x$-axis and then by $45^{\circ}$ about a distinguished $y$-axis, you will find that swapping the order of rotations takes you to a different place. A slightly more mathematical way to look at this is to consider the rotation matrices for the $x$-axis rotation and the $y$-axis rotation. If we have an orthonormal basis $(\hat{x},\hat{y},\hat{z})$ of $\mathbb{R}^3$, a rotation matrix $S \in \mathsf{GL}(3,\mathbb{R})$ will need to preserve orthonormality. The set of matrices that accomplishes this is called the orthogonal group, $\mathsf{O}(3,\mathbb{R})$. For rotations we would also like to preserve orientation. Physically, this reflects the fact that Newton's Laws implicitly choose a distinguished orientation (e.g. the right-hand rule). Topologically, it says that we only want the connected component containing the identity (the identity is an orientation-preserving map), and this component is a subgroup. It is called the special orthogonal group, $\mathsf{SO}(3,\mathbb{R})$.
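    To make the two-dimensional claim concrete, here is a quick numerical sketch in Python (assuming NumPy is available; the helper name rot2 is mine, not standard notation). It checks that the $30^{\circ}$ and $15^{\circ}$ rotations above commute, and that each rotation matrix is orthogonal with determinant $+1$, i.e. lies in $\mathsf{SO}(2,\mathbb{R})$.

        import numpy as np

        def rot2(theta):
            # 2x2 rotation matrix, using the sign convention written out in the next paragraph.
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, s],
                             [-s, c]])

        R30 = rot2(np.deg2rad(30))
        R15 = rot2(np.deg2rad(15))

        # Planar rotations commute: 30 degrees then 15 degrees equals 15 then 30,
        # and both equal a single 45-degree rotation.
        print(np.allclose(R30 @ R15, R15 @ R30))             # True
        print(np.allclose(R30 @ R15, rot2(np.deg2rad(45))))  # True

        # Each matrix preserves orthonormality (R^T R = I) and orientation (det = +1),
        # i.e. it lies in SO(2, R).
        print(np.allclose(R30.T @ R30, np.eye(2)), np.isclose(np.linalg.det(R30), 1.0))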

    Okay, so let's give a bit of concrete proof that rotations don't commute. Recall that a 2-dimensional rotation by an angle $\theta$ has the following $2\times 2$ representation, $$\left( \begin{array}{cc} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right)$$ As such, a rotation about the $x$-axis by an angle $\theta_1$ has a $3\times 3$ representation $A_x$, $$A_x := \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos \theta_1 & \sin\theta_1 \\ 0 & -\sin\theta_1 & \cos \theta_1 \end{array} \right) $$ and a rotation about the $y$-axis by an angle $\theta_2$ has a $3 \times 3$ representation $A_y$, $$ A_y := \left( \begin{array}{ccc} \cos \theta_2 & 0 & \sin \theta_2 \\ 0 & 1 & 0 \\ -\sin\theta_2 & 0 & \cos \theta_2 \end{array} \right)$$ Now let's compute the matrix commutator $[A_x, A_y]:= A_x A_y - A_y A_x$. Note that $$ A_x A_y = \left( \begin{array}{ccc} \cos \theta_2 & 0 & \sin \theta_2 \\ -\sin \theta_1 \sin \theta_2 & \cos\theta_1 & \sin\theta_1 \cos \theta_2 \\ -\cos\theta_1 \sin \theta_2 & -\sin \theta_1 & \cos\theta_1 \cos \theta_2 \end{array} \right)$$ and $$ A_y A_x = \left( \begin{array}{ccc} \cos \theta_2 & -\sin\theta_1 \sin \theta_2 & \cos \theta_1 \sin\theta_2 \\ 0 & \cos \theta_1 & \sin \theta_1 \\ -\sin \theta_2 & -\sin\theta_1 \cos \theta_2 & \cos \theta_1 \cos \theta_2 \end{array} \right) $$ Hence we have the commutator,[2] $$ [A_x, A_y] = \left(\begin{array}{ccc} 0 & \sin\theta_1 \sin\theta_2 & \sin\theta_2 \left(1 - \cos \theta_1 \right) \\ -\sin\theta_1 \sin \theta_2 & 0 & \sin\theta_1 \left(\cos\theta_2 - 1 \right)\\ \sin\theta_2 \left(1 - \cos\theta_1 \right) & \sin\theta_1 \left(\cos\theta_2 - 1 \right)& 0 \end{array}\right) $$ If $\theta_1 = \theta_2 = 45^{\circ}$, as in our previous example, then the commutator doesn't vanish, and so the rotations do not commute. So what does this actually mean?
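    The non-vanishing of this commutator is easy to check numerically as well. Here is a short Python sketch (again assuming NumPy; rot_x and rot_y are just the matrices $A_x$ and $A_y$ above) that evaluates $[A_x, A_y]$ at $\theta_1 = \theta_2 = 45^{\circ}$ and confirms that it is non-zero but traceless.

        import numpy as np

        def rot_x(t):
            # A_x from the text: rotation about the x-axis by angle t.
            c, s = np.cos(t), np.sin(t)
            return np.array([[1, 0, 0],
                             [0, c, s],
                             [0, -s, c]])

        def rot_y(t):
            # A_y from the text: rotation about the y-axis by angle t.
            c, s = np.cos(t), np.sin(t)
            return np.array([[c, 0, s],
                             [0, 1, 0],
                             [-s, 0, c]])

        t1 = t2 = np.deg2rad(45)
        Ax, Ay = rot_x(t1), rot_y(t2)

        comm = Ax @ Ay - Ay @ Ax                  # the matrix commutator [A_x, A_y]
        print(np.round(comm, 3))                  # non-zero entries: the rotations don't commute
        print("commute:", np.allclose(comm, 0))   # False
        print("trace:", np.trace(comm))           # ~0, as for any matrix commutator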

     In 1843 the famous mathematician, physicist, and polymath William Rowan Hamilton discovered the quaternions, $\mathbb{H}$, a non-commutative division algebra that can be used to describe rotations in space.
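    As a small preview, here is a minimal Python sketch of quaternion arithmetic (a hand-rolled product qmul acting on arrays of the form $(w, x, y, z)$; the helper names are mine). It checks that the multiplication is non-commutative, $ij = k$ while $ji = -k$, and that conjugating a vector by the unit quaternion $q = \cos\tfrac{\theta}{2} + \sin\tfrac{\theta}{2}\,k$ rotates it by $\theta$ about the $z$-axis.

        import numpy as np

        def qmul(p, q):
            # Hamilton's product of quaternions stored as (w, x, y, z).
            w1, x1, y1, z1 = p
            w2, x2, y2, z2 = q
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        i, j, k = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
        print(qmul(i, j))   # [0, 0, 0, 1]  =  k
        print(qmul(j, i))   # [0, 0, 0, -1] = -k, so the product is non-commutative

        # Rotate the vector (1, 0, 0) by 90 degrees about the z-axis via v -> q v q*,
        # where v is embedded as the pure quaternion (0, v) and q* is the conjugate of q.
        theta = np.deg2rad(90)
        q = np.array([np.cos(theta / 2), 0, 0, np.sin(theta / 2)])
        q_conj = q * np.array([1, -1, -1, -1])
        v = np.array([0, 1, 0, 0])
        print(np.round(qmul(qmul(q, v), q_conj), 3))   # [0, 0, 1, 0]: the x-axis maps to the y-axis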

To be completed $\ldots$

Footnotes

  1. $\uparrow$ To those who have some experience with abstract algebra, this description sounds more like a ring than an algebra. The idea here, however, is to equip a vector space (which carries the additional structure of scalar multiplication) with a product operation.
  2. $\uparrow$ Note that the commutator above is traceless, as every matrix commutator is, and for small angles it is approximately skew-symmetric: to leading order, $[A_x, A_y] \approx \theta_1 \theta_2 \, [X, Y]$, where $X$ and $Y$ are the skew-symmetric generators of the two rotations. The skew-symmetric matrices are precisely the elements of $\text{Lie}\left(\mathsf{SO}(3)\right) \cong \mathfrak{so}(3)$, and for Matrix Lie Groups the Lie bracket on the Lie algebra is given by the matrix commutator. For instance see Hall, Lie Groups, Lie Algebras, and Representations, Chapter 1.