I have now discussed what the Laplacian looks like in a general Riemannian manifold and can thus talk about the basic equations of mathematical physics in a more abstract context. Specifically, the key ones are the Laplace equation $\displaystyle \Delta u = 0$

for ${u}$ a smooth function on a Riemannian manifold. Since ${\Delta = \mathrm{div} \, \mathrm{grad}}$, this often comes up when ${u}$ is the potential function of a divergence-free field, e.g. the electrostatic potential in a charge-free region. The other two major ones are the heat equation $\displaystyle u_t - \Delta u = 0$

for a smooth function ${u}$ on the product manifold ${\mathbb{R} \times M}$ for ${M}$ a Riemannian manifold, and the wave equation $\displaystyle u_{tt} - \Delta u = 0$

in the same setting. (I don’t know the physics behind these at all, but it’s probably in any number of textbooks.) We are often interested in solving these given some kind of boundary data. In the case of the Laplace equation, this is called the Dirichlet problem. In two dimensions, for data given on a circle, the Dirichlet problem is solved using the Poisson integral, as already discussed. To go further, however, we would need to introduce the general theory of elliptic operators and Sobolev spaces. This will rely heavily on the material discussed earlier on the Fourier transform and distributions, and before plunging into it—if I do decide to plunge into it on this blog—I want to briefly discuss why Fourier transforms are so important in linear PDE. Specifically, I’ll discuss the solution of the heat equation on a half space. So, let’s say that we want to treat the case of ${\mathbb{R}_{\geq 0} \times \mathbb{R}^n}$. In detail, we are given initial data ${u(0,x)}$, continuous on ${\mathbb{R}^n}$. We want to extend ${u(0,x)}$ to a solution ${u(t,x)}$ of the heat equation which is continuous on ${\{0\} \times \mathbb{R}^n}$ and smooth on ${\mathbb{R}_+^{n+1}}$. To start with, let’s say that ${u(0, \cdot) \in \mathcal{S}(\mathbb{R}^n)}$. The big idea is that by the Fourier inversion formula, we can get an equivalent equation if we apply the Fourier transform to both sides; this converts the inconvenience of differentiation into much simpler multiplication. Here the Fourier transform is taken as a function of ${x}$, with ${t}$ fixed. So, assuming we have a solution ${u(t,x)}$ as above: $\displaystyle \hat{u}_t = \widehat{\Delta u} = -4\pi^2 |x|^2 \hat{u}.$

Also, we know what ${\hat{u}(0,x)}$ looks like. So this is actually an ordinary differential equation in ${t}$ for each fixed ${x}$, with initial condition ${\hat{u}(0,x)}$. The solution is unique, and it is given by $\displaystyle \hat{u}(t,x) = e^{-4 \pi^2 |x|^2 t} \hat{u}(0,x).$ (more…)
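To make this concrete, here is a minimal numerical sketch of my own (not from the post itself), in one dimension and writing ${\xi}$ for the frequency variable: it recovers ${u(t,x)}$ from the multiplier formula ${\hat{u}(t,\xi) = e^{-4\pi^2 \xi^2 t}\,\hat{u}(0,\xi)}$ by a Riemann-sum Fourier inversion, for the Gaussian initial datum ${u(0,x) = e^{-\pi x^2}}$ (which is its own Fourier transform), and compares against the closed-form spreading Gaussian. The grid bounds and sizes are arbitrary choices, not part of the mathematics.

```python
import math

def u0_hat(xi):
    # Fourier transform of u(0, x) = exp(-pi x^2); this Gaussian is its
    # own Fourier transform, so no numerical transform is needed here.
    return math.exp(-math.pi * xi * xi)

def u(t, x, L=10.0, grid=2000):
    # Fourier inversion of the multiplier solution
    #   u^(t, xi) = exp(-4 pi^2 xi^2 t) * u0^(xi),
    # approximated by a midpoint Riemann sum over [-L, L].  The integrand
    # is even in xi, so the inversion integral reduces to a cosine sum.
    h = 2 * L / grid
    total = 0.0
    for k in range(grid):
        xi = -L + (k + 0.5) * h
        total += math.exp(-4 * math.pi**2 * xi * xi * t) * u0_hat(xi) \
                 * math.cos(2 * math.pi * x * xi)
    return total * h

def u_exact(t, x):
    # Closed form for this initial datum: the Gaussian spreads over time.
    b = 1 + 4 * math.pi * t
    return math.exp(-math.pi * x * x / b) / math.sqrt(b)
```

With these conventions, `u(0.1, 0.5)` agrees with `u_exact(0.1, 0.5)` to many digits, which is a reassuring check that the multiplier recipe really does solve the initial value problem.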

The next basic tool we’re going to need is the theory of distributions.

Distributions

Distributions are extremely useful because they are both fairly general (including all integrable functions as well as objects like the Dirac delta function) and flexible enough to allow operations such as differentiation. So oftentimes we can obtain distributional solutions to differential equations we are interested in.

Actually, we’ll only discuss here tempered distributions. A tempered distribution is a continuous linear functional ${\phi: \mathcal{S} \rightarrow \mathbb{C}}$, continuity being with respect to the Fréchet topology on ${\mathcal{S}}$ given by its standard seminorms. Clearly the tempered distributions form a vector space ${\mathcal{S}'}$; it is a locally convex space if we endow it with the weak* topology. It must now be seen how distributions generalize functions. So, if ${f \in L^p(\mathbb{R}^n)}$ for any ${p, 1 \leq p \leq \infty}$, then ${f}$ can be made into a distribution $\displaystyle w \rightarrow \int_{\mathbb{R}^n} wf \, dx, \ w \in \mathcal{S}.$

In fact, we could just assume that ${f}$ is measurable and grows at most polynomially. In particular, we have an embedding ${\mathcal{S} \rightarrow \mathcal{S}'}$.

An example of a distribution that is not a function is the Dirac distribution ${\delta}$ mapping ${f \rightarrow f(0)}$. (more…)
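As a toy illustration (my own sketch, not part of the post), one can model a tempered distribution in code as nothing more than a callable on test functions: the Dirac distribution evaluates at the origin, and the pairing ${w \mapsto \int wf \, dx}$ is approximated by a truncated Riemann sum.

```python
import math

# Toy model (illustration only): a "distribution" is just a Python callable
# acting on test functions w.

def delta(w):
    # The Dirac distribution: evaluation at the origin.
    return w(0.0)

def from_function(f, L=10.0, grid=4000):
    # The distribution w -> integral of w(x) f(x) dx, approximated by a
    # midpoint Riemann sum truncated to [-L, L].  Valid when the tail
    # contribution is negligible (e.g. w Schwartz, f of polynomial growth).
    h = 2 * L / grid
    def phi(w):
        return sum(w(-L + (k + 0.5) * h) * f(-L + (k + 0.5) * h)
                   for k in range(grid)) * h
    return phi

gaussian = lambda x: math.exp(-math.pi * x * x)
```

Here `delta(gaussian)` returns exactly ${1.0}$, while `from_function(lambda x: 1.0)(gaussian)` approximates ${\int e^{-\pi x^2} \, dx = 1}$; both the delta and the honest function act through one and the same interface, which is the whole point of the generalization.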

This post’ll be pretty quick—the Plancherel theorem, a basic result on Fourier transforms, is a quick corollary of what I’ve already talked about.

We have shown that the Fourier transform is an isomorphism of ${\mathcal{S}}$ onto itself; the inverse is given by the inverse Fourier transform. The next step is to extend this to an isometry of ${L^2}$ onto itself. Since ${\mathcal{S}}$ is dense in ${L^2}$, it will be sufficient to show that $\displaystyle ||f||_2 = ||\hat{f}||_2$

for ${f \in \mathcal{S}}$. We will do this by proving the identity $\displaystyle (\hat{f},g) = (f, \tilde{g})$

Using Fubini’s theorem: $\displaystyle (\hat{f},g) = \iint f(y) e^{-2 \pi i x \cdot y} \overline{ g(x)} \, dy \, dx = \int f(y) \overline{\tilde{g}(y)} \, dy = (f, \tilde{g}).$

In other words, the Fourier transform and its inverse are adjoints. If we take ${g = \hat{f}}$ and use the inversion formula, it becomes clear that the Fourier transform preserves the ${L^2}$-norm, whence follows

Theorem 1 (Plancherel) The Fourier transform extends to an isometry of ${L^2}$ onto itself.

Incidentally, for an ${L^2}$ function ${f}$, it is not necessarily true that $\displaystyle \hat{f}(x) = \int f(y) e^{-2 \pi i x \cdot y} dy$

because that integral need not exist. It is, however, true that the integral will exist almost everywhere in a “principal value” sense, which we do not need to bother with here.
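The norm identity can also be checked numerically; the following is my own sketch (with all grid parameters chosen arbitrarily), computing the Fourier transform of a rapidly decaying function by a midpoint Riemann sum and comparing the two ${L^2}$ norms.

```python
import math, cmath

def fourier(f, xi, L=8.0, grid=800):
    # hat f(xi) = integral of f(y) exp(-2 pi i xi y) dy,
    # midpoint Riemann sum on [-L, L].
    h = 2 * L / grid
    return sum(f(-L + (k + 0.5) * h)
               * cmath.exp(-2j * math.pi * xi * (-L + (k + 0.5) * h))
               for k in range(grid)) * h

def l2_norm(g, L=8.0, grid=400):
    # ||g||_2 over [-L, L], again by a midpoint Riemann sum; the tails
    # beyond [-L, L] are negligible for the functions used here.
    h = 2 * L / grid
    return math.sqrt(sum(abs(g(-L + (k + 0.5) * h)) ** 2
                         for k in range(grid)) * h)

# A rapidly decaying (Schwartz) test function with no special symmetry.
f = lambda x: (1 + x) * math.exp(-math.pi * x * x)
```

For this ${f}$, `l2_norm(f)` and `l2_norm(lambda xi: fourier(f, xi))` agree to high accuracy, as the Plancherel theorem predicts.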

In a sense, this is a continuous analog of the Parseval theorem, which states that the Fourier coefficient map from ${L^2 \rightarrow l^2}$ (for ${l^2}$ the space of two-sided square-summable sequences, indexed by ${\mathbb{Z}}$) is an isometry.

So, as I’ve already indicated, I’m planning to talk about PDEs for the next month or so, both the general theory and specific posts on the equations of mathematical physics. There are some preliminaries I’ll have to do first, such as Fourier transforms. Today, I’ll get up to the inversion formula for Schwartz functions.

The Schwartz Class

The Schwartz class ${\mathcal{S}}$ consists of smooth functions ${f: \mathbb{R}^n \rightarrow \mathbb{C}}$ such that for all multi-indices ${\alpha=(\alpha_1, \dots, \alpha_n), \beta=(\beta_1, \dots, \beta_n)}$, $\displaystyle x^{\alpha} D^{\beta} f := x_1^{\alpha_1} \dots x_n^{\alpha_n} \left( \frac{\partial}{\partial x_1}\right)^{\beta_1} \dots \left( \frac{\partial}{\partial x_n}\right)^{\beta_n}f$

is bounded. For instance, the function ${e^{-|x|^2}}$ is in ${\mathcal{S}}$, as is any ${C^{\infty}}$ function with compact support. Elements of ${\mathcal{S}}$ are, loosely speaking, functions that decrease rapidly at ${\infty}$ together with all their partial derivatives.

There is a way to make the space ${\mathcal{S}}$ into a Fréchet space, via the countable family of seminorms $\displaystyle ||f||_{\alpha,\beta} := \sup_x |x^{\alpha} D^{\beta} f(x) |.$ (more…)
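For concreteness, here is a rough one-dimensional sketch of my own (with hypothetical grid and step-size choices) that approximates a seminorm ${||f||_{a,b} = \sup_x |x^{a} D^{b} f(x)|}$ by sampling on a grid, with iterated central finite differences standing in for the derivative.

```python
import math

def seminorm(f, a, b, L=6.0, grid=6000, h=1e-4):
    # Approximate ||f||_{a,b} = sup_x |x^a (d/dx)^b f(x)| in one dimension:
    # the sup is taken over an equally spaced grid on [-L, L], and the b-th
    # derivative is replaced by iterated central finite differences.
    def deriv(g, order):
        if order == 0:
            return g
        return deriv(lambda x: (g(x + h) - g(x - h)) / (2 * h), order - 1)
    Db = deriv(f, b)
    step = 2 * L / grid
    return max(abs((-L + k * step) ** a * Db(-L + k * step))
               for k in range(grid + 1))

gauss = lambda x: math.exp(-x * x)
```

For the Gaussian ${e^{-x^2}}$, `seminorm(gauss, 1, 0)` approximates ${\sup_x |x e^{-x^2}| = (2e)^{-1/2}}$, attained at ${x = 1/\sqrt{2}}$; the defining condition for membership in ${\mathcal{S}}$ is exactly that every such seminorm be finite.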