I’m going to get back eventually to the story about finite-dimensional modules, but for now, Lie algebras are more immediate to my project, so I’ll talk about them here.

From an expository standpoint, jumping straight to {\mathfrak{sl}_2} right after defining Lie algebras was unsound. I am going to try to motivate the subject here and discuss some basic results, to lead into more of the general representation theory.

Derivations

So let’s consider a not-necessarily-associative algebra {A} over some field {F}. In other words, {A} is an {F}-vector space, and there is an {F}-bilinear map {A \times A \rightarrow A}, sending say {(x,y) \rightarrow xy}, which need not be commutative, associative, or unital. A Lie algebra with its Lie bracket is one example.

The notion of derivations in an algebra is a generalization of derivatives:

Definition 1 A derivation of {A} is a linear map {D: A \rightarrow A} such that for all {a,b \in A},

\displaystyle   D(ab) = D(a)b+aD(b). \ \ \ \ \ (1)

If {A} has a two-sided unit, then {D(1) = D(1 \cdot 1) = 1 D(1) + D(1) 1 = 2 D(1)}, so {D(1) = 0}.

So, for instance, if {A} is a polynomial ring {F[x_1, \dots, x_n]} (which is an algebra), and {D} is a partial derivative {\frac{\partial}{\partial x_i}} with respect to some variable {x_i}, then {D} is a derivation of {A}: (1) is the product rule.
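As a quick sanity check, here is a small sympy computation (a toy example of my own, not part of the original discussion) verifying the Leibniz rule (1) for {D = \partial/\partial x} on a couple of sample polynomials:

```python
# Verify that D = d/dx is a derivation of F[x, y]:
# D(ab) = D(a) b + a D(b) for sample polynomials a, b.
import sympy as sp

x, y = sp.symbols('x y')
D = lambda p: sp.diff(p, x)   # the derivation D = d/dx

a = x**2 * y + 3 * x          # two sample polynomials
b = x * y**2 - 5

lhs = sp.expand(D(a * b))
rhs = sp.expand(D(a) * b + a * D(b))
print(lhs == rhs)  # prints True: the product rule is exactly (1)
```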

Lie algebras and derivations are closely intertwined:

Example 1 In a Lie algebra {L}, for {x \in L} fixed, the map {\text{Ad } x: L \rightarrow L}, {\text{Ad } x(y) = [x,y]}, is a derivation. This is just the Jacobi identity.
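To see this concretely, one can check the derivation identity {\text{Ad } x([y,z]) = [\text{Ad } x(y), z] + [y, \text{Ad } x(z)]} numerically for {2 \times 2} matrices under the commutator bracket (a toy verification of my own, using numpy):

```python
# Check numerically that Ad x(y) = [x, y] is a derivation of the
# commutator bracket on 2x2 matrices:
# Ad x([y, z]) = [Ad x(y), z] + [y, Ad x(z)] -- a rearranged Jacobi identity.
import numpy as np

def br(a, b):
    return a @ b - b @ a  # the commutator bracket [a, b] = ab - ba

# the standard basis of sl_2
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

x, y, z = e + 2 * h, f - e, h + 3 * f   # arbitrary sample elements
lhs = br(x, br(y, z))
rhs = br(br(x, y), z) + br(y, br(x, z))
print(np.allclose(lhs, rhs))  # prints True
```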

We can go the other way too:

Proposition 2 Let {A} be an algebra. Then the set {Der(A)} of derivations of {A} (which is a vector space, as can be checked directly), is a Lie algebra, under the bracket

\displaystyle  [D_1, D_2](x) = D_1 (D_2(x)) - D_2(D_1(x)).

This is a matter of computation. The key point is that {[D_1, D_2]} is again a derivation, even though the composite {D_1 D_2} generally is not: expanding {D_1(D_2(ab))} by the Leibniz rule gives {D_1 D_2(a) b + D_2(a) D_1(b) + D_1(a) D_2(b) + a D_1 D_2(b)}, and when we subtract the same expression with {D_1, D_2} interchanged, the mixed terms cancel, leaving {[D_1, D_2](a) b + a [D_1, D_2](b)}. The bracket is clearly bilinear and satisfies {[D, D]=0}, and the Jacobi identity follows from the definitions, as for any commutator bracket.
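Here is a small sympy illustration (a hypothetical example of mine) of why the commutator is needed: the composite {D_1 D_2} of two derivations is generally not a derivation, but the bracket {[D_1, D_2]} is:

```python
# D1 = d/dx and D2 = x * d/dx are both derivations of F[x].
# The composite D1 D2 violates the Leibniz rule, but the
# commutator [D1, D2] satisfies it (here [D1, D2] = d/dx).
import sympy as sp

x = sp.symbols('x')
D1 = lambda p: sp.diff(p, x)       # the derivation d/dx
D2 = lambda p: x * sp.diff(p, x)   # the derivation x * d/dx

def brk(p):                        # [D1, D2] applied to p
    return sp.expand(D1(D2(p)) - D2(D1(p)))

a, b = x**3 + 1, x**2 - x
# the composite D1 D2 fails the Leibniz rule ...
print(sp.expand(D1(D2(a * b))) == sp.expand(D1(D2(a)) * b + a * D1(D2(b))))  # False
# ... but the commutator satisfies it
print(brk(a * b) == sp.expand(brk(a) * b + a * brk(b)))  # True
```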

Here is a general remark:

Using this, we can make the collection of smooth vector fields on a {C^\infty} manifold into a Lie algebra. Indeed, a vector field is the same thing as a derivation of the space of smooth functions—an algebra under pointwise multiplication—by a general theorem, so we can take a Lie bracket. On a Lie group, moreover, we can look at left-invariant vector fields, and get the corresponding Lie algebra associated to the Lie group.

Basic Constructions

We’ll give a couple of basic ways to construct new Lie algebras from old ones:

Example 2 If {L, L'} are Lie algebras, then {L \oplus L'} is a Lie algebra with the bracket defined componentwise: {[(x, x'), (y, y')] = ([x,y], [x',y'])}.

Example 3 If {L} is a Lie algebra and {I} a Lie ideal, i.e. a subspace such that {x \in I, y\in L} implies {[x,y] \in I}, then we can make the quotient vector space {L/I} into a Lie algebra via {[x + I, y + I] = [x,y] + I}; the ideal condition is exactly what makes this well-defined.

There is also a slightly more complicated construction that amalgamates a Lie algebra and a derivation:

Example 4 Let {L} be a Lie algebra and {D} a nonzero derivation of {L}. Then the vector space {L \oplus F D} (where {F D} is just the one-dimensional space spanned by {D}) can be made into a Lie algebra containing {L} as a subalgebra, as follows. If {x,y \in L}, define {[(x,0), (y,0)] = ([x,y],0)}. If {aD, bD \in FD}, then {[ (0, aD), (0, bD) ] = 0}. Finally, if {x \in L, aD \in FD}, set {[ (x, 0), (0, aD)] = ( a Dx , 0)}. We need to check that, with this bracket, {L \oplus FD} is actually a Lie algebra. Bilinearity and the alternating property follow from the definition. The Jacobi identity, being trilinear in {l, l', l''}, reduces to checking individual cases, e.g. {l, l' \in L, l'' \in FD}; this last case is precisely the statement that {D} is a derivation of {L}.
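To make this concrete, here is a numerical check of the Jacobi identity for this bracket in a toy case of my own choosing: {L} a two-dimensional abelian Lie algebra (so that every linear map is a derivation), with an arbitrary {D}.

```python
# Check the Jacobi identity for L + F*D, where L is abelian of dimension 2.
# Elements are stored as pairs (vector in L, coefficient of D).
import numpy as np

D = np.array([[1., 2.], [0., -1.]])   # any linear map is a derivation of abelian L

def br(p, q):
    # [x, y] = 0 since L is abelian; the cross terms use [(x,0),(0,bD)] = (b Dx, 0)
    (v, a), (w, b) = p, q
    return (b * (D @ v) - a * (D @ w), 0.0)

def jacobi_sum(p, q, r):
    # [p,[q,r]] + [q,[r,p]] + [r,[p,q]], which should vanish
    terms = [br(p, br(q, r)), br(q, br(r, p)), br(r, br(p, q))]
    return sum(t[0] for t in terms), sum(t[1] for t in terms)

rng = np.random.default_rng(0)
p, q, r = [(rng.standard_normal(2), float(rng.standard_normal())) for _ in range(3)]
vec, coeff = jacobi_sum(p, q, r)
print(np.allclose(vec, 0.0) and coeff == 0.0)  # prints True
```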

The Adjoint Representation

We’re still covering the basic definitions and facts about Lie algebras, but representation theory is our ultimate goal. Here I want to describe an easy way of getting a representation of a Lie algebra.

Definition 3 Suppose {L} is of dimension {n}. Then the adjoint representation {\text{Ad}: L \rightarrow \mathfrak{gl}(L) \cong \mathfrak{gl}_n} sends {x \rightarrow \text{Ad } x}.

Recall from before that {\text{Ad } x} is the linear transformation of {L} such that {(\text{Ad } x)(y) = [x,y]}. We need to check this is actually a representation, i.e.

\displaystyle  (\text{Ad } x)(\text{Ad } y) - (\text{Ad } y)(\text{Ad } x) = \text{Ad } [x,y],

for which we evaluate at some {z \in L}, so that our expression to prove becomes

\displaystyle  [ x, [y, z]] - [y, [x,z]] = [[x,y],z],

which follows from a combination of the Jacobi identity and antisymmetry.
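As an illustration (my own computation, not in the original), one can write out {\text{Ad } x} as an explicit {3 \times 3} matrix for {L = \mathfrak{sl}_2} in the basis {(e, f, h)} and check the homomorphism property numerically:

```python
# The adjoint representation of sl_2 as 3x3 matrices in the basis (e, f, h):
# check Ad[x, y] = (Ad x)(Ad y) - (Ad y)(Ad x).
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, f, h]

def br(a, b):
    return a @ b - b @ a

def coords(m):
    # coordinates of a traceless 2x2 matrix m = a*e + b*f + c*h
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def Ad(x):
    # columns are the coordinates of [x, v] for each basis vector v
    return np.column_stack([coords(br(x, v)) for v in basis])

x, y = e + h, f - 2 * e                 # arbitrary sample elements
lhs = Ad(br(x, y))
rhs = Ad(x) @ Ad(y) - Ad(y) @ Ad(x)
print(np.allclose(lhs, rhs))  # prints True
```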

The adjoint representation is especially nice when it’s injective, or faithful; then we’ve found a way to embed {L} as a Lie subalgebra of some {\mathfrak{gl}_n}. Since {(\text{Ad } x)(y) = [x,y]}, the kernel of {\text{Ad}} is precisely the center of {L}, so this happens exactly when {L} has trivial center, e.g. when {L} is semisimple (more about that later).

So, with these basic facts covered, it’s time to start looking at representations in the general case. Next up should be representations of nilpotent algebras, i.e. Engel’s theorem.