Self-Imposed Confusion with Lie Algebra Homomorphisms
This lunacy took up the better part of an afternoon so I had to share it with you.
I’ve been working through the elementary details of Lie algebras lately. The Laboratory newsletter is precisely that: a rough draft of my lecture notes.
I was really unhappy with my prior treatment of soluble Lie algebras, and have been struggling to piece the story together coherently. The biggest offender so far was also the silliest.
Today I’d like to tell the story of how I confused myself - for an egregious amount of time - with endomorphisms. I’ll share exactly how I got my feeble brain into a rut that I couldn’t shake for hours, and at the end I’ll share the sentence that brought about spontaneous enlightenment[1]. Be warned, this is a real example of a normal person getting hung up on extremely elementary ideas.
Morphisms, Homo and Endo
Definition: Let 𝔤 and 𝔥 be finite-dimensional, complex Lie algebras. A linear map M between them is called a Lie algebra homomorphism if it preserves the Lie bracket. Concretely, we may write

M([x, y]) = [M(x), M(y)]

for all x, y in 𝔤.
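As a quick sanity check of the definition: if 𝔤 and 𝔥 are both abelian, every bracket vanishes and the condition reads 0 = 0, so any linear map at all is a homomorphism. The moment the brackets are nontrivial, though, preserving them is a genuine constraint on M.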
Senseless confusion came for me when M was an endomorphism, that is, when 𝔥 = 𝔤. More precisely, I was considering isomorphism classes of low-dimensional Lie algebras.
The Two-Dimensional Lie Algebra
There is a unique nonabelian Lie algebra in two dimensions (up to isomorphism, of course). It’s easy to understand its properties.
Let 𝔤 be such an algebra. Amongst any collection of vectors in 𝔤, there will be at most two linearly independent ones. Hence there will be at most a single linearly independent Lie bracket between them.
The Lie bracket, in other words, defines a specific vector in 𝔤 (up to scale). That vector alone spans [𝔤,𝔤]. It is convenient to choose a basis that aligns with this structure. Let y be a vector that spans [𝔤,𝔤]. Then we simply choose a linearly independent vector x so that the Lie bracket is normalized to y,

[x, y] = y.
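Spelled out, with x′ and λ as temporary bookkeeping: pick any x′ linearly independent of y. Since [x′, y] lies in [𝔤,𝔤] = span{y}, we have [x′, y] = λy for some scalar λ. If λ were zero, every bracket in 𝔤 would vanish and 𝔤 would be abelian; so λ ≠ 0, and the rescaled vector x = x′/λ satisfies [x, y] = y.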
How could this Lie algebra fail to be unique?
By definition a Lie bracket on 𝔤 must be a bilinear map, hence the most general thing we can write down is

[x, y] = αx + βy

for some complex constants α and β, not both zero (otherwise 𝔤 would be abelian).
For 𝔤 to be unique up to isomorphism, there must exist a homomorphism that converts this general Lie bracket into the particular one mentioned above.
Of course, we can always choose new representative vectors in 𝔤, and for a concrete construction that choice amounts to a linear map. While all Lie algebra homomorphisms are linear maps, there is no reason for the converse to be true.
However, if we can find a subset of linear maps that are homomorphisms, we can ask if that subset is enough to prove uniqueness.
The Group of Endomorphisms
The group GL(𝔤) represents the invertible, linear endomorphisms[2] of 𝔤. These amount to 2×2 matrices,

M = ( a  b ; c  d ),

where the semicolon separates the rows and the entries are written in the (x, y) basis, subject to the constraint that

det M = ad - bc ≠ 0.
Is there a subgroup Hom(𝔤) of GL(𝔤) that satisfies the condition for a Lie algebra homomorphism,

M([x, y]) = [M(x), M(y)]?
If so, what does the space of all such transformed Lie brackets look like?
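(An aside on names: invertible homomorphisms from 𝔤 to itself are usually called automorphisms, and the textbook symbol for this subgroup would be Aut(𝔤). I’ll stick with Hom(𝔤) here to match the notation above.)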
Certainly the identity element 1 of GL(𝔤) counts as a homomorphism. So we know Hom(𝔤) exists, at least trivially. Let’s see how expansive it is. The left-hand side of the homomorphism criterion gives

M([x, y]) = M(αx + βy) = α M(x) + β M(y).
Including the right-hand side, which bilinearity and antisymmetry reduce to [M(x), M(y)] = (ad - bc)[x, y], the criterion becomes

α M(x) + β M(y) = (det M)(αx + βy).
Hence we can rephrase the isomorphism criterion as an inverse eigenvalue problem,

M([x, y]) = (det M) [x, y].
Inverse in the sense that we have the eigendata, but are looking for the associated matrix.
Let Q denote the scaled matrix M / det M, and let us assume that det M is not unity[3]. Hence

Q [x, y] = [x, y].
Q has two eigenvalues, and we’ve agreed that the first is 1. Let γ be the second.
Exercise: Prove that [x,y] is an eigenvector of M with eigenvalue 1/γ.
From the result of the exercise, we see that the other eigenvalue of M is 1.
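If you want to peek at the exercise, here is one route, keeping the conventions above. Since Q[x, y] = [x, y] and Q = M / det M, we have M[x, y] = (det M)[x, y]. For a 2×2 matrix, det Q = det M / (det M)² = 1/det M, while the product of Q’s eigenvalues is 1 · γ. Hence det M = 1/γ and

M[x, y] = (1/γ)[x, y].

The eigenvalues of M are then det M times those of Q, namely 1/γ and 1, the latter being the other eigenvalue quoted above.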
The next section is gratuitous on purpose. Unless you’re super jazzed to see the gory details, you can skip it without losing anything.
Parametrizing Hom(𝔤)
The characteristic equation of M reads

λ² - (a + d) λ + det M = 0.
We know this equation is satisfied for both 1 and 1/γ. The former gives a constraint,

1 - (a + d) + det M = 0.
In terms of γ,

a + d = 1 + 1/γ.
The latter gives the same relation.
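As a quick check: plugging λ = 1/γ into the characteristic equation, using det M = 1/γ, gives 1/γ² - (a + d)/γ + 1/γ = 0; multiplying through by γ recovers a + d = 1 + 1/γ.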
The eigenvectors of M are
with eigenvalues,
respectively[4].
We have constructed M specifically so that
To transform from the (x,y) basis to the eigenvector basis we need the similarity transformation
Let us verify that S is invertible:
Hence we have
In particular, so long as det S is nonvanishing, we can simply rescale the other eigenvector
to find
In short, to each arbitrary Lie bracket on 𝔤 we have identified a family[5] of homomorphisms of 𝔤 whose spectrum gives us a basis for 𝔤 that naturally obeys the desired Lie bracket.
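For a hand-picked example (the numbers are my own), take the bracket [x, y] = x, so α = 1 and β = 0, and take γ = 2. The matrix M defined by M(x) = x/2 and M(y) = y has det M = 1/2 = 1/γ, and it really is a homomorphism: M([x, y]) = M(x) = x/2, while [M(x), M(y)] = [x/2, y] = x/2. Its eigenvectors are [x, y] = x, with eigenvalue 1/2, and y, with eigenvalue 1. Writing ỹ = x and x̃ = -y for the rescaled eigenbasis (the sign on x̃ is just there to make the normalization come out right), we get [x̃, ỹ] = [-y, x] = [x, y] = ỹ, which is exactly the canonical bracket from before.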
My Lunacy, Described
This concrete construction is totally asinine. I simply restated our earlier argument without the elegance. This is where I got hung up for just a little too long. You see, it should surprise no one to learn that any Lie algebra 𝔤 is homomorphic to itself.
Let me say that a little louder.
Every Lie algebra is isomorphic to itself.
The issue I had muddled in my head was how the Lie bracket transforms under general linear maps. Here’s the thing: you can use a particular basis of 𝔤 to define a Lie bracket, but a change of basis doesn’t imply a change in that bracket. Indeed, it’s bilinear! That’s kind of the whole point.
To derive this more explicitly, let 𝔤 be any finite-dimensional Lie algebra and let e_i form a basis for 𝔤. We can define the Lie algebra concretely using the so-called structure constants,

[e_i, e_j] = c_ij^k e_k,

with a sum over the repeated index k implied. Consider an arbitrary element M of GL(𝔤) acting on these basis elements:

e′_i = M_i^j e_j.

The bilinearity of the bracket gives

[e′_i, e′_j] = M_i^a M_j^b [e_a, e_b].

We can then substitute in the structure constants,

[e′_i, e′_j] = M_i^a M_j^b c_ab^k e_k.

Of course we’d like to study the bracket in terms of the transformed basis, hence we can insert the identity matrix 1 = M⁻¹ M to rewrite

e_k = (M⁻¹)_k^l e′_l,

and so find

[e′_i, e′_j] = M_i^a M_j^b c_ab^k (M⁻¹)_k^l e′_l.

That is to say, under a general linear transformation M, the structure constants transform as a tensor:

c′_ij^l = M_i^a M_j^b c_ab^k (M⁻¹)_k^l.
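If you would rather let the computer do the index gymnastics, here is a quick numerical sanity check of that transformation law. It is a sketch using numpy, with the same index conventions as above (the first indices label basis vectors, the last index labels the component of the bracket):

```python
import numpy as np

# Structure constants c[i, j, k] of the two-dimensional nonabelian algebra:
# [e_0, e_1] = e_1, so c[0, 1, 1] = 1 and antisymmetry forces c[1, 0, 1] = -1.
c = np.zeros((2, 2, 2))
c[0, 1, 1] = 1.0
c[1, 0, 1] = -1.0

def bracket(u, v, c):
    """Components of [u, v], i.e. u^a v^b c_ab^k."""
    return np.einsum("a,b,abk->k", u, v, c)

rng = np.random.default_rng(seed=0)
M = rng.normal(size=(2, 2))       # a generic change of basis, invertible with probability 1
Minv = np.linalg.inv(M)

# Transformed structure constants: c'_ij^l = M_i^a M_j^b c_ab^k (M^-1)_k^l
c_prime = np.einsum("ia,jb,abk,kl->ijl", M, M, c, Minv)

# Check: the bracket of the transformed basis vectors e'_i = M_i^a e_a, expanded
# back in the original basis, matches what the transformed constants predict.
for i in range(2):
    for j in range(2):
        lhs = bracket(M[i], M[j], c)                  # [e'_i, e'_j] in the old basis
        rhs = np.einsum("l,la->a", c_prime[i, j], M)  # c'_ij^l e'_l, re-expanded
        assert np.allclose(lhs, rhs)

print("The structure constants transform as a tensor.")
```

The same check goes through in any dimension; only the shape of c changes.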
TL;DR
Linear endomorphisms on a Lie algebra 𝔤 need not be homomorphisms. We are always entitled to change our basis on 𝔤 so long as we don’t explicitly change the Lie bracket. Defining it once - using structure constants, say - is sufficient.
In practice, we typically reduce our calculations to our defining basis anyway.
Of course the adjoint map scales with its representative member. That is literally what a vector is designed to do.
Sometimes we get ideas confused in our head and have to run with them to the bitter end. As a penance for this madness, I wrote this post to tell you all about it. If that’s not what a test blog is for I don’t know what is!
Appendix: The “Defective” Case
And now the case where det M = 1. Studying this is certainly overkill, but it is both a fun exercise in linear algebra and useful for later discussions.
For the case of det M = 1, we have a stronger constraint on the matrix elements. First, the characteristic equation of M reduces to

λ² - 2λ + 1 = (λ - 1)² = 0.
We can use this to fix d = 2 - a. The determinant itself then fixes c,

c = -(a - 1)²/b,

whenever b is nonzero.
Under these constraints the two eigenvectors we’ve studied for other values of det M degenerate. As a result, the similarity transformation picks up a nullity of 1 and our prescription fails. M is sometimes called a “defective” matrix.
As we’ve seen previously, if b or c vanish, then a and d are fixed at unity. If they don’t vanish, you could write M as

M = ( a  b ; -(a - 1)²/b  2 - a ),

reading the entries row by row.
As we just mentioned, both of the previous eigenvectors degenerate to v_1, which remains an eigenvector of M with eigenvalue 1. We can therefore compute the generalized eigenvector v_g by solving

(M - 1) v_g = v_1.
This yields
The similarity transformation from the x,y basis is now:
The determinant of S is then
If we rescale v_g by det S and identify v_1 with [x,y], we find
Once again, we end up with the unique, two-dimensional nonabelian Lie algebra in canonical form.
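To make the defective case less abstract, here is one concrete instance; the numbers, signs, and normalizations are my own choices. Take the bracket [x, y] = x + y and the matrix M with M(x) = -y and M(y) = x + 2y, i.e. a = 0, b = 1, c = -1, d = 2, which indeed satisfies d = 2 - a and c = -(a - 1)²/b. Then det M = 1 and tr M = 2, so both eigenvalues equal 1 and the lone eigenvector is v_1 = x + y = [x, y]. Solving (M - 1) v_g = v_1 gives v_g = y, up to additions of v_1. Setting ỹ = v_1 and x̃ = -v_g, we land once more on [x̃, ỹ] = [-y, x + y] = [x, y] = ỹ, the canonical form.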
Footnotes

1. That is to say, I’ll share a trivial fact.

2. In case it isn’t obvious, the group operation is just matrix multiplication.

3. Everything we have to say will assume a fixed value of det M, so in some sense we’re working with a projective representation of Hom(𝔤). That said, problems arise for the case when det M = 1. Because we are enforcing the additional requirement that one of the eigenvalues of M is unity, we necessarily end up with a multiplicity of 2. Since any nonvanishing value of γ will work for our immediate purpose, we can punt the interesting case of det M = 1 to a different discussion.

4. Note that these hold for any invertible M, in particular when b or c vanish. In either of those cases, d is necessarily restricted to 1.

5. The family is parametrized by γ, where γ is neither 0 nor 1.