I talked a bit earlier about nilpotent Lie algebras and Engel’s theorem. There is an analog for solvable Lie algebras, and the corresponding result is Lie’s theorem.
So, first the definitions. Solvability is similar to nilpotence in that one takes repeated commutators, except one uses the derived series instead of the lower central series.
In what follows, fix a Lie algebra $\mathfrak{g}$ over an algebraically closed field $k$ of characteristic zero.
Definition 1 The derived series of $\mathfrak{g}$ is the descending filtration $\mathfrak{g} = D_0 \mathfrak{g} \supset D_1 \mathfrak{g} \supset D_2 \mathfrak{g} \supset \cdots$ defined by $D_{i+1} \mathfrak{g} = [D_i \mathfrak{g}, D_i \mathfrak{g}]$. The Lie algebra $\mathfrak{g}$ is solvable if $D_i \mathfrak{g} = 0$ for some $i$.
For instance, a nilpotent Lie algebra is solvable, since if $\mathfrak{g} = C_0 \mathfrak{g} \supset C_1 \mathfrak{g} \supset \cdots$ (with $C_{i+1}\mathfrak{g} = [\mathfrak{g}, C_i \mathfrak{g}]$) is the lower central series, then $D_i \mathfrak{g} \subset C_i \mathfrak{g}$ for each $i$.
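In slightly more detail, the inclusion $D_i \mathfrak{g} \subset C_i \mathfrak{g}$ is a one-line induction on $i$ (the base case $D_0 \mathfrak{g} = C_0 \mathfrak{g} = \mathfrak{g}$ is clear):

```latex
D_{i+1}\mathfrak{g}
  = [D_i\mathfrak{g},\, D_i\mathfrak{g}]
  \subset [\mathfrak{g},\, C_i\mathfrak{g}]
  = C_{i+1}\mathfrak{g},
```

using the inductive hypothesis $D_i \mathfrak{g} \subset C_i \mathfrak{g}$ together with the trivial inclusion $D_i \mathfrak{g} \subset \mathfrak{g}$.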
A key example of a solvable Lie algebra is the subalgebra $\mathfrak{b} \subset \mathfrak{gl}_n$ consisting of upper-triangular matrices. When one takes the commutator the first time, the result is a strictly upper-triangular, hence nilpotent, matrix. Taking further commutators, one eventually gets to zero. Lie’s theorem (together with Ado’s theorem that any finite-dimensional Lie algebra has a faithful finite-dimensional representation) says basically that every solvable Lie algebra is of this form.
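As a quick sanity check, here is a small numerical sketch (my own illustration, not part of the original argument; the matrix size and names are arbitrary) watching the derived series of the upper-triangular $3 \times 3$ matrices die out:

```python
import numpy as np

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba of two matrices."""
    return a @ b - b @ a

rng = np.random.default_rng(0)

# Random elements of b_3, the upper-triangular 3x3 matrices.
A, B, C, D = (np.triu(rng.standard_normal((3, 3))) for _ in range(4))

# One commutator lands in the strictly upper-triangular matrices:
X, Y = bracket(A, B), bracket(C, D)
assert np.allclose(np.tril(X), 0) and np.allclose(np.tril(Y), 0)

# A second commutator is supported in the top-right corner only:
Z = bracket(X, Y)
corner = np.zeros((3, 3))
corner[0, 2] = Z[0, 2]
assert np.allclose(Z, corner)

# And a third commutator vanishes: the derived series has reached zero.
W = bracket(bracket(A, C), bracket(B, D))
assert np.allclose(bracket(Z, W), 0)
print("derived series of b_3 reaches 0 after three steps")
```

Note that $\mathfrak{b}_3$ is solvable but not nilpotent: its lower central series stabilizes at the strictly upper-triangular matrices rather than reaching zero.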
Theorem 2 (Lie) Let $\mathfrak{g}$ be a solvable Lie algebra and $V$ a finite-dimensional representation of $\mathfrak{g}$. Then there exists a flag $0 = V_0 \subset V_1 \subset \cdots \subset V_n = V$ of $\mathfrak{g}$-submodules, such that the successive quotients $V_{i+1}/V_i$ are one-dimensional. In particular, using a basis corresponding to this flag, we have an action of $\mathfrak{g}$ given solely by upper-triangular matrices.
First of all, it will be enough to find a single eigenvector for the action of $\mathfrak{g}$: its span will be our $V_1$, and we can proceed inductively on the quotient $V/V_1$ to get the filtration. Lie’s theorem is often stated in this eigenvector form; we’ll prove this version.
The proof is divided into numerous steps.
First, we need some kind of inductive step on the dimension of $\mathfrak{g}$, as in Engel’s theorem.
Claim 1 There is an ideal $\mathfrak{h} \subset \mathfrak{g}$ such that $\mathfrak{h}$ has codimension one in $\mathfrak{g}$.
We gave a separate proof for nilpotent Lie algebras, which no longer works, but we can give the following argument. Since $\mathfrak{g}$ is solvable (and we may assume nonzero), $[\mathfrak{g}, \mathfrak{g}] \neq \mathfrak{g}$, and $[\mathfrak{g}, \mathfrak{g}]$ is an ideal (more generally, as is easily checked, when one takes the bracket of two ideals, one gets another ideal). Also $\mathfrak{g}/[\mathfrak{g}, \mathfrak{g}]$ is abelian and nonzero, so we can take an ideal in it, i.e. a subspace, of codimension one, and take the inverse image in $\mathfrak{g}$. This proves the claim.
Thus we can assume $\mathfrak{g} = \mathfrak{h} \oplus kx$ for some $x \in \mathfrak{g}$.
So, by the inductive hypothesis and the solvability of $\mathfrak{h}$, there is a common eigenvector in $V$ for $\mathfrak{h}$. We can thus find some $\lambda \in \mathfrak{h}^{\vee}$ (the dual space) such that

$$V_\lambda = \{ v \in V : h v = \lambda(h) v \text{ for all } h \in \mathfrak{h} \} \neq 0.$$

If we show $V_\lambda$ is $x$-invariant, then we can find an $x$-eigenvector in $V_\lambda$; this will be an eigenvector for all of $\mathfrak{g}$, and we will have proved our claim. So we must prove that if $v \in V_\lambda$, then $x v \in V_\lambda$, which is to say $h(xv) = \lambda(h) xv$ for each $h \in \mathfrak{h}$. But:

$$h(xv) = x(hv) + [h, x]v = \lambda(h) xv + \lambda([h, x]) v,$$

since $[h, x] \in \mathfrak{h}$ (as $\mathfrak{h}$ is an ideal) and $v \in V_\lambda$.
If we get $\lambda([h, x]) = 0$ for all $h \in \mathfrak{h}$, then we’ll be done.
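To make the reduction concrete, here is a small sketch (my own illustration, not from the proof itself) using the two-dimensional non-abelian solvable Lie algebra in its standard $2 \times 2$ matrix representation, with $h$ spanning the codimension-one ideal $\mathfrak{h}$ and $x$ spanning a complement:

```python
import numpy as np

def bracket(a, b):
    """Lie bracket [a, b] = ab - ba of two matrices."""
    return a @ b - b @ a

# The 2-dimensional non-abelian solvable Lie algebra in 2x2 matrices:
# h spans a codimension-one abelian ideal, and [x, h] = h.
x = np.array([[1.0, 0.0], [0.0, 0.0]])
h = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(bracket(x, h), h)

# h is nilpotent, so its only eigenvalue is 0: lambda(h) = 0, and
# V_lambda = ker h is spanned by v = e_1.
v = np.array([1.0, 0.0])
assert np.allclose(h @ v, 0)

# The identity h(xv) = lambda(h) xv + lambda([h, x]) v from the proof:
# here both terms on the right vanish, so xv stays in V_lambda.
assert np.allclose(h @ (x @ v), 0)

# And v is indeed an x-eigenvector (eigenvalue 1), hence an eigenvector
# for all of g, exactly as the reduction promises.
assert np.allclose(x @ v, 1.0 * v)
print("found a common eigenvector for the solvable algebra")
```

In the basis $(v, e_2)$ every element of this algebra acts by an upper-triangular matrix, which is the flag Lie’s theorem produces in this tiny case.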
There is a general lemma to this effect, which I will talk about tomorrow: the present week is the final one at RSI, during which we write our papers, so my posts at least will be shorter than normal.