I talked a bit earlier about nilpotent Lie algebras and Engel’s theorem. There is an analog for solvable Lie algebras, with the corresponding result being Lie’s theorem.

So, first the definitions. Solvability is similar to nilpotence in that one takes repeated commutators, except one uses the derived series instead of the lower central series.

In what follows, fix a Lie algebra {L} over an algebraically closed field {k} of characteristic zero.

Definition 1 The derived series of {L} is the descending filtration {D_n} defined by {D_0 := L, D_n := [D_{n-1}, D_{n-1}]}. The Lie algebra {L} is solvable if {D_M=0} for some {M}.
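As a quick sanity check on the definition, consider the two-dimensional nonabelian Lie algebra with basis {x, y} and {[x,y]=y} (the smallest solvable non-nilpotent example). Here is a minimal numerical sketch of mine (assuming numpy; not part of the exposition) verifying that its derived series is {L \supset ky \supset 0}:

```python
import numpy as np

def bracket(a, b):
    """Commutator bracket of two matrices."""
    return a @ b - b @ a

# The 2-dimensional nonabelian Lie algebra, realized inside gl_2:
# x = E_11, y = E_12, with [x, y] = y.
x = np.array([[1.0, 0.0], [0.0, 0.0]])
y = np.array([[0.0, 1.0], [0.0, 0.0]])

# D_1 = [L, L] is spanned by [x, y] = y
assert np.allclose(bracket(x, y), y)

# D_2 = [D_1, D_1] = 0, since D_1 is one-dimensional
assert np.allclose(bracket(y, y), 0)
```

Note that this algebra is solvable but not nilpotent: the lower central series stabilizes at {ky} and never reaches zero.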

For instance, a nilpotent Lie algebra is solvable: if {\{C_n\}} is the lower central series, then {D_n \subset C_n} for each {n}, by induction ({D_0 = C_0 = L}, and {D_n = [D_{n-1}, D_{n-1}] \subset [L, C_{n-1}] = C_n}).

A key example of a solvable Lie algebra is the subalgebra of {\mathfrak{gl}_n} consisting of upper-triangular matrices. Taking the commutator once yields a strictly upper-triangular (hence nilpotent) matrix, and taking further commutators eventually gives zero. Lie’s theorem (together with Ado’s theorem that any finite-dimensional Lie algebra has a faithful finite-dimensional representation) says essentially that every finite-dimensional solvable Lie algebra is of this form.
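To see this concretely, here is a short numerical sketch of mine (assuming numpy; an illustration, not part of the proof) computing the dimensions along the derived series of the upper-triangular {3 \times 3} matrices. It uses the observation that if a set of matrices spans {\mathfrak{g}}, then the pairwise brackets span {[\mathfrak{g},\mathfrak{g}]}:

```python
import itertools
import numpy as np

def bracket(a, b):
    return a @ b - b @ a

def span_dim(mats):
    """Dimension of the linear span of a list of matrices."""
    if not mats:
        return 0
    return int(np.linalg.matrix_rank(np.array([m.ravel() for m in mats])))

def derived(mats):
    """All pairwise brackets: spans [g, g] whenever `mats` spans g."""
    return [bracket(a, b) for a, b in itertools.combinations(mats, 2)]

# basis E_ij (i <= j) of the upper-triangular 3x3 matrices (dimension 6)
basis = []
for i in range(3):
    for j in range(i, 3):
        e = np.zeros((3, 3))
        e[i, j] = 1.0
        basis.append(e)

dims = []
spanning = basis
while span_dim(spanning) > 0:
    dims.append(span_dim(spanning))
    spanning = derived(spanning)
dims.append(0)
print(dims)  # [6, 3, 1, 0]
```

The output {[6, 3, 1, 0]} records {D_0} (all upper-triangular), {D_1} (strictly upper-triangular), {D_2 = k E_{13}}, and {D_3 = 0}, so the algebra is solvable.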

Theorem 2 (Lie) Let {L} be a solvable Lie algebra and {V} a finite-dimensional representation. Then there exists a flag {0=V_0 \subset V_1 \subset \dots \subset V_n = V} of {L}-submodules, such that the successive quotients {V_{i+1}/V_i} are one-dimensional. In particular, in a basis adapted to this flag, {L} acts by upper-triangular matrices.

First of all, it will be enough to find a single simultaneous eigenvector {v} for the action of {L}: its span is a one-dimensional submodule, and we can proceed inductively with the quotient to get the filtration. Lie’s theorem is often stated in this eigenvector form; we’ll prove this version.

The proof is divided into numerous steps.

First, we need some kind of inductive step on the dimension of {L}, as in Engel’s theorem.

Claim 1 There is an ideal {I \subset L} such that {I} has codimension one in {L}.

We gave a separate proof of this for nilpotent Lie algebras, which no longer works here, but we can argue as follows. First, {D_1 = [L,L] \neq L} (we may assume {L \neq 0}): if {[L,L]} were all of {L}, the derived series would be constant and never reach zero, contradicting solvability. Also, {D_1} is an ideal (more generally, as is easily checked, the bracket of two ideals is again an ideal). Now {L/D_1} is abelian and nonzero, so any subspace of it of codimension one is an ideal; take such a subspace and let {I} be its inverse image in {L}. This proves the claim.

Thus we can write {L = I \oplus kX} for some {X \in L \setminus I}.

So, by the inductive hypothesis (applied to the subalgebra {I}, which is solvable and of smaller dimension), the set

\displaystyle  \{ w \in V: \ w \mathrm{ \ is \ a \ simultaneous \ eigenvector \ for \ the \ action \ of \ } I \}

is nonempty. We can thus find some {\lambda \in I^*} (dual space) such that

\displaystyle  V_\lambda := \{ v \in V : Yv = \lambda(Y) v \mathrm{ \ for \ } Y \in I \} \neq 0.

If we show {V_\lambda} is {X}-invariant, then we can find an {X}-eigenvector inside {V_\lambda} (as {k} is algebraically closed); this will be an eigenvector for all of {L}, and we will have proved our claim. So we must prove that if {v \in V_\lambda} and {Y \in I}, then {Y(Xv) = \lambda(Y)(Xv)}. But:

\displaystyle  Y(Xv) = X(Yv) + [Y,X] v = \lambda(Y) Xv + \lambda([Y,X]) v,

where the last step uses {v \in V_\lambda} together with the fact that {[Y,X] \in I} (since {I} is an ideal).

If we show {\lambda([Y,X])=0}, then we’ll be done.
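To make the invariance step concrete, here is a toy instance of mine (assuming numpy; the choice of {L}, {I}, {X}, and {V} below is my own illustration, not from the argument): take {L} the upper-triangular {2 \times 2} matrices acting on {V = k^2}, with codimension-one ideal {I = \mathrm{span}\{E_{11}, E_{12}\}} and {X = E_{22}}. Then {e_1} spans a weight space {V_\lambda} with {\lambda(E_{11}) = 1}, {\lambda(E_{12}) = 0}, and one checks it is {X}-invariant:

```python
import numpy as np

# Toy instance: L = upper-triangular 2x2 matrices on V = k^2,
# I = span{E11, E12} (a codimension-one ideal), X = E22.
E11 = np.array([[1.0, 0.0], [0.0, 0.0]])
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
E22 = np.array([[0.0, 0.0], [0.0, 1.0]])

v = np.array([1.0, 0.0])  # e_1, a simultaneous eigenvector for I

# the weight lambda on I: lambda(E11) = 1, lambda(E12) = 0
assert np.allclose(E11 @ v, 1.0 * v)
assert np.allclose(E12 @ v, 0.0 * v)

# V_lambda = span{e_1} is X-invariant: X e_1 = 0 * e_1
assert np.allclose(E22 @ v, 0.0 * v)

# lambda vanishes on the brackets [Y, X] for Y in I:
# [E11, E22] = 0 and [E12, E22] = E12, and lambda(E12) = 0
for Y in (E11, E12):
    comm = Y @ E22 - E22 @ Y
    assert np.allclose(comm @ v, 0.0 * v)
```

Here the brackets {[Y,X]} land in the kernel of {\lambda}, exactly the vanishing we need in general.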

There is a general lemma to this effect, which I will talk about tomorrow: the present week is the final one in RSI, during which we write our papers, so my posts at least will be shorter than normal.