Ok, recall our goal was to prove Helgason’s formula,

\displaystyle \boxed{ (d \exp)_{tX}(Y) = \left( \frac{ 1 - e^{\theta( - tX^* )}}{\theta(tX^*)} (Y^*) \right)_{\exp(tX)}.}  

and that we have already shown

\displaystyle {(d \exp)_{tX}(Y) f = \sum_{n=0}^{\infty} \frac{t^n}{(n+1)!} ( X^{*n} Y^* + X^{*(n-1)} Y^* X^* + \dots + Y^* X^{*n})f(p).} 

First of all, we have to do some slightly messy Lie bracket work.

Lemma 1

In an associative ring {R}, fix elements {A,B}, let {S_n(A,B) := A^nB + A^{n-1}B A + \dots + BA^n}, and let {L_A} be the operator {C \mapsto [A,C]}. Then

  1. \displaystyle BA^n = \sum_{k=0}^n \binom{n}{k} A^k L_{-A}^{n-k} B . 
  2. \displaystyle S_n(A,B) = \sum_{k=0}^n \binom{ n+1 }{k+1} A^{n-k} L_{-A}^k B.  

 

I’ll defer the proof (see below).
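Since the lemma is a purely formal identity in any associative ring, it can be spot-checked numerically in a matrix algebra. Here is a minimal sketch (helper names like `ad_neg_pow` are my own, not from the post), using the convention {L_{-A}B = [-A,B] = BA - AB}:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def ad_neg_pow(A, C, k):
    """Apply L_{-A} : C -> [-A, C] = CA - AC, k times."""
    for _ in range(k):
        C = C @ A - A @ C
    return C

def S(n, A, B):
    """S_n(A,B) = A^n B + A^{n-1} B A + ... + B A^n."""
    return sum(np.linalg.matrix_power(A, n - j) @ B @ np.linalg.matrix_power(A, j)
               for j in range(n + 1))

n = 5
# Identity 1: B A^n = sum_k C(n,k) A^k L_{-A}^{n-k} B
lhs1 = B @ np.linalg.matrix_power(A, n)
rhs1 = sum(comb(n, k) * np.linalg.matrix_power(A, k) @ ad_neg_pow(A, B, n - k)
           for k in range(n + 1))
assert np.allclose(lhs1, rhs1)

# Identity 2: S_n(A,B) = sum_k C(n+1,k+1) A^{n-k} L_{-A}^k B
rhs2 = sum(comb(n + 1, k + 1) * np.linalg.matrix_power(A, n - k) @ ad_neg_pow(A, B, k)
           for k in range(n + 1))
assert np.allclose(S(n, A, B), rhs2)
```

Of course, passing for random matrices proves nothing, but it is a cheap guard against sign and index errors in formulas like these.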

Now from what we’ve shown, and the lemma applied to the ring generated by the vector field operators, we get

\displaystyle (d \exp)_{tX}(Y) f = \sum_{n=0}^{\infty}\sum_{k=0}^n \frac{1}{(k+1)!(n-k)!} (tX^*)^{n-k} \theta(-tX^*)^k Y^* f (p). 

If we treat {n-k} and {k+1} as independent variables {a} and {b}, i.e. rearrange the terms of the double sum (I’m only sketching the justification, which you can read about in detail in Helgason’s book, but the idea is that these are analytic functions whose derivatives grow at most factorially, so if {t} is very small we get absolute convergence), then one obtains

\displaystyle \sum_{a \geq 0, \, b \geq 1} \frac{1}{a! b! } (tX^{*})^a \theta(-tX^*)^{b-1} Y^* f(p). 

Finally, if we sum this first with respect to {a} and use the next-to-last boxed formula in the previous post (evaluating an analytic function on an exponential map’s image), we get

\displaystyle \sum_{b \geq 1} \frac{1}{b!} \theta(-tX^*)^{b-1} Y^*f( \exp(tX)). 

Since {\sum_{b \geq 1} z^{b-1}/b! = (e^z - 1)/z}, taking {z = \theta(-tX^*)} turns this into exactly the boxed expression {\left( \frac{1 - e^{\theta(-tX^*)}}{\theta(tX^*)} (Y^*) \right) f(\exp(tX))}. That's Helgason's formula. (Phew. Long computations don't lend themselves all that well to blog posts.)

Now why does this matter? First of all, this formula can be used to prove that the sectional curvature (which I haven’t defined yet but should by the end of this month) of a 2-dimensional Riemannian (not necessarily analytic!) manifold is the same thing as the Gauss curvature. Second, for an analytic Lie group with its exponential map, the formula holds without the hypothesis that {t} be very small. Another possible future topic.

Proof of the Lemma

First we tackle 1. Set {Q_n(A,B) := BA^n}, and let {R_n} denote the right-hand side of the identity claimed in 1. Since {BA = AB + L_{-A}B} by the definition of the Lie bracket, it is easily checked that {Q_n} satisfies

\displaystyle Q_n(A,B) = AQ_{n-1}(A,B) + Q_{n-1}(A, L_{-A}B). 

We will check that {R_n} satisfies an analogous relation. Since {R_0=Q_0}, the proof will then follow by induction. Now computing {AR_{n-1}(A,B) + R_{n-1}(A,L_{-A}B)} yields

\displaystyle \sum_{k=0}^{n-1} \left[ \binom{n-1}{k} A^{k+1} L^{n-k-1}_{-A}B + \binom{n-1}{k} A^k L^{n-k}_{-A} B \right]. 

Shifting the index in the second sum and applying Pascal’s formula {\binom{n-1}{k} + \binom{n-1}{k+1} = \binom{n}{k+1}}, this becomes

\displaystyle L^n_{-A}B + \sum_{k=0}^{n-1} \binom{n}{k+1} A^{k+1} L^{n-k-1}_{-A} B = R_n(A,B), 

where the isolated term {L^n_{-A}B} is the {k=0} summand of the second sum, which the shift leaves behind. This proves 1. Actually, that was really just the binomial theorem argument. I made this longer than it had to be by trying to figure it out myself, but anyway we have to do something similar for 2, so writing this all out may not have been a bad idea.

Let {T_n} denote the right-hand side of 2. The equality holds for {n=0}, so we will compute {S_n - AS_{n-1}} and {T_n - AT_{n-1}} and prove the lemma by induction. Since {AS_{n-1}(A,B)} reproduces every term of {S_n(A,B)} except the last,

\displaystyle S_n - A S_{n-1} = BA^n = Q_n(A,B). 

Now I claim that {T_n} works out the same way. Computing {T_n - A T_{n-1}} gives

\displaystyle \sum_{k=0}^n \left( \binom{n+1}{k+1} - \binom{n}{k+1} \right) A^{n-k} L_{-A}^k B = \sum_{k=0}^n \binom{n}{k} A^{n-k} L_{-A}^k B = Q_n(A,B), 

where the first equality is Pascal’s formula and the second is 1 with the summation index reversed ({k \mapsto n-k}). So {S_n} and {T_n} satisfy the same recurrence and agree at {n=0}, which proves 2. Note that 1 was used in the proof of 2.
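As with the lemma itself, the two recurrences that drive these inductions — {Q_n(A,B) = AQ_{n-1}(A,B) + Q_{n-1}(A, L_{-A}B)} and {S_n - AS_{n-1} = BA^n} — are easy to spot-check with random matrices. A small sketch (the helper names are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

def Q(n, A, B):
    """Q_n(A,B) = B A^n."""
    return B @ np.linalg.matrix_power(A, n)

def S(n, A, B):
    """S_n(A,B) = A^n B + A^{n-1} B A + ... + B A^n."""
    return sum(np.linalg.matrix_power(A, n - j) @ B @ np.linalg.matrix_power(A, j)
               for j in range(n + 1))

LB = B @ A - A @ B   # L_{-A} B = [-A, B]
n = 4

# Q_n(A,B) = A Q_{n-1}(A,B) + Q_{n-1}(A, L_{-A}B)
assert np.allclose(Q(n, A, B), A @ Q(n - 1, A, B) + Q(n - 1, A, LB))

# S_n - A S_{n-1} = B A^n
assert np.allclose(S(n, A, B) - A @ S(n - 1, A, B), B @ np.linalg.matrix_power(A, n))
```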
