This is the first in a series of posts about the Atiyah-Singer index theorem.

Let {V, W} be finite-dimensional vector spaces (over {\mathbb{C}}, say), and consider the space {\hom_{\mathbb{C}}(V, W)} of linear maps {T: V \rightarrow W}. To each {T \in \hom_{\mathbb{C}}(V, W)}, we can assign two numbers: the dimension of the kernel {\ker T} and the dimension of the cokernel {\mathrm{coker} T}. These are obviously nonconstant, and not even locally constant. However, the difference {\dim \ker T - \dim \mathrm{coker} T = \dim V - \dim W} is constant in {T}.
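
To see why, note that by rank-nullity both quantities are controlled by the rank of {T}:

\displaystyle \dim \ker T = \dim V - \mathrm{rank} T, \qquad \dim \mathrm{coker} T = \dim W - \mathrm{rank} T,

so the rank cancels in the difference, leaving {\dim V - \dim W}.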

This was a trivial observation, but it leads to something deeper. More generally, let’s consider an operator (such as, eventually, a differential operator), on an infinite-dimensional Hilbert space. Choose separable, infinite-dimensional Hilbert spaces {V, W}; while they are abstractly isomorphic, we don’t necessarily want to choose an isomorphism between them. Consider a bounded linear operator {T: V \rightarrow W}.

Definition 1 {T} is Fredholm if {T} is “invertible up to compact operators,” i.e. there is a bounded operator {U: W \rightarrow V} such that {TU - I} and {UT - I} are compact.

In other words, if one forms the category of Hilbert spaces and bounded operators, and quotients by the ideal (in this category) of compact operators, then {T} is invertible in the quotient category. It thus follows that adding a compact operator does not change Fredholmness: in particular, {I + K} is Fredholm if {V = W} and {K: V \rightarrow V} is compact.
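
A standard example to keep in mind is the unilateral shift {S: \ell^2 \rightarrow \ell^2}, {S(a_1, a_2, \dots) = (0, a_1, a_2, \dots)}. It is Fredholm: its adjoint {S^*} (the backwards shift) satisfies

\displaystyle S^* S = I, \qquad S S^* = I - P,

where {P} is the rank-one projection onto the first coordinate, and finite-rank operators are of course compact.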

Fredholm operators are the appropriate setting for generalizing the small bit of linear algebra I mentioned earlier. In fact,

Proposition 2 A Fredholm operator {T: V \rightarrow W} has a finite-dimensional kernel and cokernel.

Proof: In fact, let {V' \subset V } be the kernel. Then if {v' \in V'}, we have

\displaystyle v' = UT v' + (I - UT) v' = (I - UT) v'

where {U} is a “pseudoinverse” to {T} as above. If we let {v'} range over the elements of {V'} of norm one, then the right-hand side ranges over a relatively compact set, since {I - UT} is compact by assumption. So the unit sphere of {V'} is contained in a compact set and is closed, hence compact; but a locally compact Banach space is finite-dimensional, so {V'} is finite-dimensional. Taking adjoints, we can similarly see that the cokernel is finite-dimensional: the adjoint {T^*} is also Fredholm (with pseudoinverse {U^*}), and {\mathrm{coker} T} can be identified with {\ker T^*}. \Box

The space of Fredholm operators between a pair of separable, infinite-dimensional Hilbert spaces is interesting. For instance, it has the homotopy type of {BU \times \mathbb{Z}}, so it is a representing space for K-theory. In particular, the space of its connected components is just {\mathbb{Z}}; the decomposition into connected components is given by the index, defined next.
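
More precisely, this is the Atiyah-Jänich theorem: for a compact Hausdorff space {X}, there is a natural isomorphism

\displaystyle [X, \mathrm{Fred}(H)] \cong K(X),

where {\mathrm{Fred}(H)} denotes the space of Fredholm operators on a fixed separable infinite-dimensional Hilbert space {H}. Taking {X} to be a point recovers the identification {\pi_0(\mathrm{Fred}(H)) \cong \mathbb{Z}} via the index.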

Definition 3 Given a Fredholm operator {T: V \rightarrow W}, we define the index of {T} to be {\dim \ker T - \dim \mathrm{coker} T}.

This is equivalently {\dim \ker T - \dim \ker T^*}, so we see that the index is always zero for a self-adjoint operator. One can check various formal properties of the index, for instance that it is additive: {\mathrm{index}(TU) = \mathrm{index} T + \mathrm{index} U}.
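
For instance, for the unilateral shift {S} from above, {\ker S = 0} while {\mathrm{coker} S} is one-dimensional (the image misses the first coordinate), so {\mathrm{index} S = -1}. Additivity then gives

\displaystyle \mathrm{index} S^n = -n, \qquad \mathrm{index} (S^*)^n = n,

which in particular shows that every integer is the index of some Fredholm operator.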

In fact, the index turns out to be a homotopy invariant. We have:

Proposition 4 The index is locally constant on the space of Fredholm operators.

A consequence is that changing a Fredholm operator by a compact operator doesn’t affect the index, i.e. {\mathrm{index} T = \mathrm{index}(T + K)} for {K} compact. That is because the path {t \mapsto T + tK}, {t \in [0, 1]}, consists of Fredholm operators (each {tK} is compact) and connects the two operators.
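
In particular, taking {T = I} on a single Hilbert space {V}, we recover (a form of) the classical Fredholm alternative: for {K: V \rightarrow V} compact,

\displaystyle \mathrm{index}(I + K) = \mathrm{index} I = 0,

so {I + K} is injective if and only if it is surjective.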

Elliptic operators

The important case of Fredholm operators relevant to the index theorem will be given by elliptic operators on a compact manifold. Let {X} be a compact manifold, and {E, F} vector bundles on {X}. Let {D: \Gamma(E) \rightarrow \Gamma(F)} be a differential operator of some order {k} from sections of {E} to sections of {F}. In local coordinates (and local trivializations of {E, F}), this means that {D} is just a matrix of (linear) partial differential operators of order at most {k}.
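
Concretely, in such local coordinates and trivializations one can write

\displaystyle D = \sum_{|\alpha| \le k} a_\alpha(x) \, \partial^\alpha,

where {\alpha} runs over multi-indices and each coefficient {a_\alpha(x)} is a matrix of smooth functions sending the fiber of {E} to the fiber of {F}.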

Given {D}, we can associate to it a symbol or linearization

\displaystyle \sigma(D): \pi^* E \rightarrow \pi^* F

where {\pi: T^*X \rightarrow X} is the projection from the cotangent bundle. The idea is that the symbol of a differential operator {\sum a_\alpha \partial^\alpha} should be {\sum_{|\alpha| = k} a_\alpha \xi^\alpha}, where {\xi} is the cotangent variable; stating this in terms of the cotangent bundle is the invariant way of doing so. In order to do this, let’s choose a cotangent vector {v} lying over {x \in X}. To define the map

\displaystyle E_x \rightarrow F_x

associated to {v \in T^*_xX}, pick {e \in E_x} and extend {e} to a section of {E} defined in a small neighborhood of {x}. Choose a smooth function {f} such that {df(x) = v}, and consider

\displaystyle \frac{1}{k!} D( (f - f(x))^k e) ( x) \in F_x.

One can check that this is well-defined (independent of the choice of extension of {e} and of the function {f}) and gives a homomorphism of vector bundles {\pi^* E \rightarrow \pi^* F}. The idea is that the symbol of a differential operator of order {k} is a sort of linearization depending on a cotangent vector, homogeneous of degree {k} in that cotangent vector {v}; only the top-order terms of {D} survive.
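
As a sanity check, take {X = \mathbb{R}}, {E = F} the trivial line bundle, and {D = \frac{d}{dx}} (so {k = 1}). For a cotangent vector {v = \xi \, dx} at a point {x_0}, choose {f} with {df(x_0) = \xi \, dx}; then

\displaystyle \frac{1}{1!} \frac{d}{dx}\Big( (f - f(x_0)) e \Big)(x_0) = f'(x_0) e(x_0) = \xi \, e(x_0),

so {\sigma(D)} at {\xi \, dx} is just multiplication by {\xi}.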

Definition 5 The operator {D} is elliptic if {\sigma(D)} is an isomorphism outside the zero section.

The idea is that an elliptic operator is something which looks like the Laplacian {\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}} on {\mathbb{R}^2}, whose symbol at a cotangent vector {\xi_1 \, dx + \xi_2 \, dy} is multiplication by {\xi_1^2 + \xi_2^2}, which is nonzero away from the zero section.
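
Indeed, with the formula above (here {k = 2}), writing {g = f - f(x_0)} with {dg(x_0) = \xi_1 \, dx + \xi_2 \, dy}, every term of {\Delta(g^2 e)} containing a factor of {g} vanishes at {x_0}, and

\displaystyle \frac{1}{2!} \Delta( g^2 e)(x_0) = \left( \left(\frac{\partial g}{\partial x}\right)^2 + \left(\frac{\partial g}{\partial y}\right)^2 \right)(x_0) \, e(x_0) = (\xi_1^2 + \xi_2^2) \, e(x_0).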

The point is that ellipticity imposes very strong properties on the operator {D}. For instance, elements of the kernel of {D} are automatically {C^\infty} sections; this is a special case of elliptic regularity. I don’t have a great explanation for this, but essentially the main point seems to be that one can choose a pseudo-inverse—a “parametrix”—for {D}. This will be an operator {D': \Gamma(F) \rightarrow \Gamma(E)} such that {DD'} and {D'D} are somehow close to the identity.

Now, of course, this is a bit tricky because {D'} would then presumably have to be a differential operator of negative order; how can you invert something like the Laplacian with another differential operator? One has to enlarge the space of operators and consider more generally pseudodifferential operators, which are allowed to have negative order. I don’t really want to get into all this here. However, here’s the idea:

Intuition: The map {D: \Gamma(E) \rightarrow \Gamma(F)} is something like a Fredholm operator, in that it admits a pseudoinverse {D'} such that {DD', D'D} are “close” to the identity.

Of course, {\Gamma(E), \Gamma(F)} are not Hilbert spaces, and one has to be more precise about what one is actually talking about here. For our purposes, though, the intuition makes plausible the following facts.
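
One standard way to make this precise (which I won't develop here) is to let {D} act on Sobolev spaces: for each {s}, the operator {D} extends to a bounded operator

\displaystyle D: H^s(E) \rightarrow H^{s-k}(F),

and the parametrix {D'} can be chosen so that {DD' - I} and {D'D - I} are smoothing operators, hence compact on these spaces (by Rellich's theorem, using compactness of {X}); so {D} is literally Fredholm in the sense above.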

Fact 1: The elliptic operator {D} has a finite-dimensional kernel and cokernel, and thus an index.

Fact 2: The index is invariant under continuous perturbations, and thus under homotopies of elliptic operators.

There is this general idea in mathematics that invariants which are discrete and locally constant should be given by topological data. For instance, in the theory of algebraic curves, there is essentially one discrete invariant: the genus (and then a continuous family of curves within that genus). But the genus is of course purely topological. So, now we have this invariant of elliptic operators on manifolds given by the index, and we’ve seen that it is discrete and locally constant. One might thus wonder whether the index can be computed from topological data.

The Atiyah-Singer index theorem states that this is true, and that in fact, to compute the index, one has to take the element in K-theory (of {T^*X}) defined by the symbol {\sigma(D)}, take its Chern character, take a certain characteristic class of the manifold {X} (the Todd class of its complexified tangent bundle), multiply, and evaluate on the fundamental class. More transparently, the index theorem defines two homomorphisms

\displaystyle K(T^*X ) \rightarrow \mathbb{Z}

from the {K}-theory of {T^*X} for a compact manifold {X} to {\mathbb{Z}}, one in terms of the indices of elliptic operators and one in terms of topological data (the Thom isomorphism, essentially), and states that they are equal.
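
For reference, the resulting cohomological formula (with sign and orientation conventions that vary from source to source) reads

\displaystyle \mathrm{index} D = (-1)^{\dim X} \left( \mathrm{ch}(\sigma(D)) \cdot \mathrm{Td}(TX \otimes \mathbb{C}) \right)[T^*X],

where {\mathrm{ch}(\sigma(D))} is the Chern character of the symbol class, {\mathrm{Td}(TX \otimes \mathbb{C})} is the Todd class of the complexified tangent bundle (pulled back to {T^*X}), and the pairing is against the fundamental class of {T^*X}.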