LOG#147. Path integral (II).

[Figure: a Gaussian filter curve.]

Are you gaussian? Are you normal?

My second post about the path integral covers functional calculus: some basic definitions, properties and formulae.

What is a functional? It is a gadget that produces a number! Numbers are cool! Functions are cool! Functionals mix both worlds.

Let me consider a space of functions, not necessarily a normed or metric space. For instance, you can take the space of continuous functions, the space of differentiable functions, the space of integrable functions, or more sophisticated spaces like L^2(\mathbb{R}^n) (the space of square-integrable functions), and so on!

Definition 1. (Functional). A functional I is a map or correspondence between (some subset of) a function space and numbers. It is a “machine” that allows you to pick a number when some function is selected in some way. That is:

I: F\longrightarrow \mathbb{R}(\mathbb{C})

f(x)\rightarrow I(f)=I(f(x))

A functional I(f) can be considered a function of an infinite number of variables: the infinite set of values of the function at every point. That is, its argument f is some kind of \infty-dim vector! Usually, nD vectors have only a finite number of components, i.e., they are nD arrays:

u=u(x_1,x_2,...,x_n)

Functionals are functions over \infty D objects/arrays!

f\sim (f_1,f_2,\ldots,f_n,\ldots)

Example: a single definite integral IS a functional. That is,

\displaystyle{I(f)=\int dx f(x)=\lim \sum_n f(x_n)\Delta x_n\sim I(f_1,f_2,\ldots)}
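By the way, you can literally play with this definition on a computer. Below is a minimal Python sketch (my own illustration, with an arbitrary interval and grid size) evaluating the integral functional as the Riemann sum above:

```python
import numpy as np

# A minimal sketch: the definite integral as a functional, computed as the
# Riemann sum lim Σ_n f(x_n) Δx_n. The interval [0, 1] and the grid size N
# are arbitrary illustrative choices.
def I(f, a=0.0, b=1.0, N=100_000):
    x = np.linspace(a, b, N, endpoint=False)  # left endpoints of the cells
    dx = (b - a) / N
    return np.sum(f(x)) * dx

print(I(np.sin))     # ≈ 1 - cos(1) ≈ 0.45970
print(I(np.square))  # ≈ 1/3
```

Every function f gets mapped to a single number: that is all a functional does.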

This (quite general) definition brings up some issues. Usually we can NOT identify a space of functions with a countable (even if infinite) set, for instance through the existence of a countable basis as in a separable Hilbert space. However, practical advances can usually be made if we impose suitable restrictions on the space of functions, restrictions which tame the potentially dangerous infinities. These restrictions can be:

1st. Fourier bounds and Fourier transforms, asking for periodicity in momentum space.

2nd. Fourier coefficient definiteness. That is, we require the functions to be periodic to some extent.

3rd. Analytic functions: Taylor or Laurent coefficients. Asking a function to be analytic solves many ill-posed problems.

Most of the functionals of interest in Physics can be expanded in the following way:

\displaystyle{I(f)=\sum_n\dfrac{1}{n!}\int dx_1dx_2\ldots dx_n F_n(x_1,\ldots,x_n)f(x_1)\cdots f(x_n)}

where the F_n are ordinary functions of an increasing (but finite) number of variables. This decomposition is “cluster-like”, and it appears in Quantum Field Theory (QFT) lots of times!
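To make this “cluster-like” structure more tangible, here is a hedged Python sketch of such an expansion truncated at n=2 on a grid. The kernels F_1 and F_2 below are toy examples invented for illustration, not anything canonical from QFT:

```python
import numpy as np

# A "cluster-like" functional truncated at n = 2 on a grid:
#   I(f) ≈ ∫ dx F_1(x) f(x) + (1/2!) ∫∫ dx dy F_2(x, y) f(x) f(y)
# F_1 and F_2 are made-up kernels, chosen only to show the structure.
x = np.linspace(0.0, 1.0, 200)
dx = x[1] - x[0]
F1 = np.ones_like(x)                           # F_1(x) = 1
F2 = np.exp(-np.abs(x[:, None] - x[None, :]))  # F_2(x, y) = e^{-|x - y|}

def I(f):
    term1 = np.sum(F1 * f) * dx                # single-integral term
    term2 = 0.5 * (f @ F2 @ f) * dx**2         # double-integral term
    return term1 + term2

print(I(np.sin(x)))  # the number this truncated functional assigns to sin
```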

Exercise (for eager readers). Give additional examples of functionals. Post them here! 🙂

Another main concept we need to discuss is the notion of functional derivative. Mathematicians are weird people. They define and classify derivatives! LOL. Beyond the usual notions of derivative (classical calculus), you find Gâteaux derivatives, Fréchet derivatives, fractional derivatives, and many others. Here, I will not focus on the features of these specific derivatives, and I am going to be deliberately ambiguous. In general, a derivative is ANY operation (operator) \delta which satisfies the so-called Leibniz rule of product derivation. That is:

\delta (ab)=(\delta a) b+ a(\delta b)

Intuitively, the generic definition for the derivative of any map a(x) can be written at formal level as a ratio:

\dfrac{\delta a}{\delta x}\sim \dfrac{a(x)-a(y)}{x-y}

whenever \vert x-y\vert\sim 0. However, the definition of a (useful) distance \vert x\vert in a general space of functions is, in general, a non-trivial task.

Definition 2. Directional derivative. The directional derivative of the functional I(f) along some function \lambda (x) is defined as:

\displaystyle{D_\lambda I(f)=\dfrac{\delta_\lambda I(f)}{\delta f}=\lim_{\varepsilon\rightarrow 0}\dfrac{I(f+\varepsilon \lambda)-I(f)}{\varepsilon}}

Applying the previous formula to a product of two function values, and expanding (f+\varepsilon\lambda)(x_1)(f+\varepsilon\lambda)(x_2)=f(x_1)f(x_2)+\varepsilon\left[\lambda(x_1)f(x_2)+f(x_1)\lambda(x_2)\right]+O(\varepsilon^2), the directional derivative of the product becomes

D_\lambda (f(x_1)f(x_2))=\lambda (x_1)f(x_2)+f(x_1)\lambda (x_2)

and similarly you can get formulae for products of n functions. The functional derivative of I(f) is a special case of the directional derivative above: the directional derivative in the direction of the delta function \lambda (x)=\delta (x-y). Please note that this is “delicate” and “intricate”, since delta functions are NOT proper functions but “distributions”. They are only meaningful when they are integrated out, just as functionals themselves!

Definition 3. Functional derivative. The functional derivative of the functional I(f) with respect to f(y) is defined by the formal expression

(1)   \begin{equation*} \displaystyle{\boxed{\dfrac{\delta I(f)}{\delta f(y)}=\lim_{\varepsilon\rightarrow 0}\dfrac{I(f+\varepsilon \delta (x-y))-I(f)}{\varepsilon}}}\end{equation*}
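To get a feeling for this definition, here is a rough numerical sketch in Python (all parameter choices are mine, purely for illustration). On a grid, the delta function \delta (x-y) becomes a spike of height 1/\Delta x at the node nearest to y, so that its “integral” equals one; the functional used is I(f)=\int_{-1}^1 f(x)dx from example 1 below:

```python
import numpy as np

# A rough grid version of Definition 3. The delta function is replaced by a
# spike of height 1/dx at the node closest to y (its Riemann sum is then 1).
# N and eps are arbitrary illustrative choices.
N = 2001
x = np.linspace(-2.0, 2.0, N)
dx = x[1] - x[0]

def I(f):                                  # I(f) = ∫_{-1}^{1} f(x) dx
    return np.sum(f[np.abs(x) <= 1.0]) * dx

def func_deriv(I, f, y, eps=1e-6):
    delta = np.zeros(N)
    delta[np.argmin(np.abs(x - y))] = 1.0 / dx   # grid delta at y
    return (I(f + eps * delta) - I(f)) / eps

f = np.sin(x)
print(func_deriv(I, f, 0.5))   # ≈ 1, since |y| < 1
print(func_deriv(I, f, 1.5))   # ≈ 0, since |y| > 1
```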

Exercise (for eager readers). What are the differences between Gateaux derivatives, Fréchet derivatives and the above functional derivative? Remark: axioms are important here.

Similarly, functional derivatives of higher order can be defined in a straightforward fashion. If the functional is given by an expression

(2)   \begin{equation*} \displaystyle{I(f)=\sum_n \dfrac{1}{n!}\int dx_1\cdots dx_n F_n(x_1,\ldots,x_n)f(x_1)\cdots f(x_n)}\end{equation*}

then its functional derivative reads

(3)   \begin{equation*} \displaystyle{\dfrac{\delta I(f)}{\delta f(x)}=\sum_{n=0}^\infty \int dx_1\cdots dx_n F_{n+1} (x,x_1,\ldots,x_n)f(x_1)\cdots f(x_n)}\end{equation*}

A list of simple examples about functional derivatives:

1) If I(f)=\int_{-1}^1f(x)dx, then

\dfrac{\delta I(f)}{\delta f(y)}=1, if \vert y\vert <1.

\dfrac{\delta I(f)}{\delta f(y)}=0, if \vert y\vert > 1.

\dfrac{\delta I(f^2)}{\delta f(y)}=2f(y)\dfrac{\delta I(f)}{\delta f(y)}.

2) If I_z(f)=f(z), then

\dfrac{\delta I_z(f)}{\delta f(y)}=\dfrac{\delta f(z)}{\delta f(y)}=\delta (z-y)

3) If I(f)=\int dx f(x)g(x), with g(x) a fixed function, then

\dfrac{\delta I(f)}{\delta f(y)}=\int dz\,\delta (z-y)\,g(z)=g(y)

4) If I(f)=\exp\left[\int dx f(x)g(x)\right], then

\dfrac{\delta I(f)}{\delta f(y)}=\dfrac{\delta}{\delta f(y)}\left[\int dx f(x)g(x)\right]\exp\left[\int dx f(x)g(x)\right]=\int dz\,\delta (z-y)\,g(z)\,I(f)=g(y)I(f)

Some extra properties:

A) Chain rule: if I(F(f)), where I, F are functionals, then

\displaystyle{\dfrac{\delta I(F(f))}{\delta f(x)}=\int dy \dfrac{\delta I}{\delta F(y)}\dfrac{\delta F(y)}{\delta f(x)}}

B) Taylor expansion in functional spaces. We can prove and write, in terms of functional derivatives, that

\displaystyle{I(f+g)=I(f)+\int dy\dfrac{\delta I}{\delta f(y)}g(y)+\dfrac{1}{2!}\int dy_0dy_1\dfrac{\delta^2I}{\delta f(y_0)\delta f(y_1)}g(y_0)g(y_1)+\ldots}
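A quick hedged sanity check of the first-order term, again in Python on a grid (the functional and the small perturbation g are toy choices of mine):

```python
import numpy as np

# Hedged check of the first-order functional Taylor term for I(f) = ∫ f² dx
# on [0, 1]: I(f + g) - I(f) ≈ ∫ (δI/δf)(y) g(y) dy = ∫ 2 f(y) g(y) dy.
x = np.linspace(0.0, 1.0, 10_000)
dx = x[1] - x[0]
I = lambda f: np.sum(f**2) * dx
f, g = np.sin(x), 1e-3 * np.cos(3 * x)
print(I(f + g) - I(f))          # exact difference
print(np.sum(2 * f * g) * dx)   # first-order Taylor term; agrees to O(g²)
```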

Definition 4. Gaussian functionals. Given a linear operator A, strictly positive (or hermitian with an inverse operator), and a real function f(x) (complex functions are also possible as “numbers”), a Gaussian functional is a functional with the following form

(4)   \begin{equation*} \boxed{\displaystyle{G_A(f)=\exp \dfrac{1}{2}\int dx f(x)A^{-1}f(x)}}\end{equation*}

For any gaussian functional, we define the “two point” correlation function C(x_1,x_2) as follows

C(x_1,x_2)=\dfrac{\delta }{\delta f(x_1)}\dfrac{\delta}{\delta f(x_2)}G_A(f)\vert_{f=0}

Exercise (for eager minds). For a given gaussian functional G_A, compute the 2-point correlation function above C(x_1,x_2) in terms of A.

Gaussian integrals with a finite number of variables x=(x_1,...,x_n) are relatively common, and they are simple multiple integrals to calculate:

\displaystyle{I(A,a)=\int dx_1\cdots dx_n\exp \left(-\dfrac{1}{2}\sum_{ij}x_iA_{ij}x_j+\sum_i a_ix_i\right)=\int dx \exp \left( -\dfrac{1}{2}xAx+a\cdot x\right)}

and where A is a complex, symmetric matrix whose eigenvalues \lambda_i satisfy Re(\lambda_i)\geq 0 and \lambda_i\neq 0. The integral can be performed by the usual procedures, and it provides

I(A,a)=(2\pi)^{n/2}(\det A)^{-1/2}\exp \left(\dfrac{1}{2}aA^{-1}a\right)
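Before taking any limits, you can check this closed formula numerically for small n. A hedged sketch for n=2, with an arbitrary positive-definite A and vector a of my own choosing:

```python
import numpy as np
from scipy import integrate

# Numerical check of I(A, a) = (2π)^{n/2} (det A)^{-1/2} exp(½ a A⁻¹ a) for
# n = 2. A and a are arbitrary illustrative choices.
n = 2
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
a = np.array([0.3, -0.2])

def integrand(x2, x1):
    x = np.array([x1, x2])
    return np.exp(-0.5 * x @ A @ x + a @ x)

numeric, _ = integrate.dblquad(integrand, -10, 10, -10, 10)
closed = (2 * np.pi) ** (n / 2) / np.sqrt(np.linalg.det(A)) \
         * np.exp(0.5 * a @ np.linalg.inv(A) @ a)
print(numeric, closed)   # both ≈ 5.07; they agree to quadrature accuracy
```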

The multivariate gaussian distribution of zero mean, P(x), is defined to be

P(x)=(2\pi)^{-n/2}(\det A)^{1/2}\exp \left(-\dfrac{1}{2}xAx\right)

The mean value with respect to the gaussian distribution of any function H(x) is defined as

\langle H\rangle_P=\int dx P(x)H(x)

We define the generating function G_F of any multivariate distribution F as the function

G_F(a)=\langle \exp (a\cdot x)\rangle_F

where a is any arbitrary constant vector. For the multivariate Gaussian distribution, we have, using its definition,

\langle \exp (a\cdot x)\rangle=\exp \left(\dfrac{1}{2}aA^{-1}a\right)

Note we take the normalization to be:

\langle 1\rangle=\langle \exp (a\cdot x)\rangle\vert_{a=0}=1

The moments of (any) distribution are defined as the mean values

\langle x_k^n\rangle

and, in general, the mean values \langle x_{k_1}\cdots x_{k_n}\rangle.

From the generating function of the distribution, we can get the moments of the distribution. We can write the following formulae

\langle x_k\rangle=\dfrac{\partial}{\partial a_k}\langle \exp (a\cdot x)\rangle \vert_{a=0}=0

In particular, we have

C(k_1,k_2)=\langle x_{k_1}x_{k_2}\rangle=\dfrac{\partial}{\partial a_{k_1}}\dfrac{\partial}{\partial a_{k_2}}\langle \exp (a\cdot x)\rangle\vert_{a=0}=(A^{-1})_{k_1k_2}

The last expression is very important. The covariance C(k_1,k_2) of the gaussian distribution is given by the elements of the inverse matrix A^{-1}.
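This covariance statement is easy to test numerically. A hedged Python check (with an arbitrary positive-definite A of my own choosing): sample P(x), whose covariance matrix is A^{-1}, and compare the empirical second moments against A^{-1}:

```python
import numpy as np

# Hedged check that the second moments of P(x) ∝ exp(-½ x A x) are (A⁻¹)_{ij}.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
samples = rng.multivariate_normal(mean=np.zeros(2), cov=np.linalg.inv(A),
                                  size=1_000_000)
print(samples.T @ samples / len(samples))  # empirical ⟨x_{k1} x_{k2}⟩
print(np.linalg.inv(A))                    # (A⁻¹)_{k1 k2}; should match
```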

Theorem 1. Wick’s theorem. All the moments of higher order of a gaussian distribution are fully determined by the moments of order 1 and 2 (i.e., by the mean and the covariance!).

A) The moments of odd order are all zero.

B) For the moments of even order:

\displaystyle{\langle x_{k_1}\cdots x_{k_n}\rangle=\dfrac{\partial}{\partial a_{k_1}}\cdots \dfrac{\partial}{\partial a_{k_n}}\langle \exp (a\cdot x)\rangle\vert_{a=0}=\sum_{\text{pairings}}\langle x_{k_{p_1}}x_{k_{p_2}}\rangle\cdots\langle x_{k_{p_{n-1}}}x_{k_{p_n}}\rangle}

For example, as an application of the theorem, for a gaussian distribution (with zero average/mean) we have:

\langle x_i\rangle=0

\langle x_ix_jx_k\rangle=0

\langle x_1x_2x_3x_4\rangle=\langle x_1x_2\rangle\langle x_3x_4\rangle+\langle x_1x_3\rangle \langle x_2x_4\rangle+\langle x_1x_4\rangle \langle x_2x_3\rangle
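Nothing prevents us from testing Wick's theorem by brute force, either. A hedged Monte Carlo sketch, with a random (but positive-definite) covariance of my own construction:

```python
import numpy as np

# Monte Carlo check of Wick's theorem for a zero-mean Gaussian with covariance
# C = A⁻¹: ⟨x1 x2 x3 x4⟩ should equal C12 C34 + C13 C24 + C14 C23.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
C = M @ M.T + 4 * np.eye(4)               # a positive-definite covariance
x = rng.multivariate_normal(np.zeros(4), C, size=2_000_000)

lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
rhs = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]
print(lhs, rhs)   # equal up to Monte Carlo noise
```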

The value of the gaussian integral was written to be

I(A,a)_n=(2\pi)^{n/2}(\det A)^{-1/2}\exp \left(\dfrac{1}{2}aA^{-1}a\right)

In the limit of “infinite variables” (dimensions), we have

\displaystyle{\lim_{n\rightarrow \infty} I(A,a)_n\sim \lim_{n\rightarrow \infty} (2\pi)^{n/2}(\det A)^{-1/2}\exp \left(\dfrac{1}{2}aA^{-1}a\right)\rightarrow \infty}

We can consider the following ratios in order to get a finite result (remember that an infinite result has NO physical meaning!):

1) \dfrac{ I(A,a)_n}{I(A,b)_n}=\exp \left(\dfrac{1}{2}aA^{-1}a-\dfrac{1}{2}bA^{-1}b\right)

2) \dfrac{I(A,a)_n}{I(A,0)_n}=\exp\left(\dfrac{1}{2}aA^{-1}a\right)

3) \dfrac{ I(A,a)_n}{I(B,a)_n}=\left(\det AB^{-1}\right)^{-1/2}\exp \left(\dfrac{1}{2}a(A^{-1}-B^{-1})a\right)

A common feature of all of the “regularizations” above is that they do NOT depend explicitly on the dimension “n”. They can be useful when considering limits of expressions as n\rightarrow \infty.
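As a toy illustration of this point, here is a hedged sketch (tridiagonal A and a one-entry source a, both chosen by me only for illustration) showing that \log I(A,a)_n grows without bound as n increases, while the ratio I(A,a)_n/I(A,0)_n converges to a finite number:

```python
import numpy as np

# log I(A, a)_n diverges with n, but the ratio I(A, a)_n / I(A, 0)_n, which
# carries no explicit (2π)^{n/2} (det A)^{-1/2} factor, stays finite.
for n in (4, 16, 64, 256):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # positive definite
    a = np.zeros(n)
    a[0] = 1.0                                            # a fixed "source"
    quad = a @ np.linalg.solve(A, a)                      # a A⁻¹ a
    log_I = 0.5 * n * np.log(2 * np.pi) \
            - 0.5 * np.linalg.slogdet(A)[1] + 0.5 * quad
    print(n, log_I, np.exp(0.5 * quad))  # log I grows; the ratio → √e ≈ 1.649
```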

Exercise (1). Define the determinant of an operator in a formal way. Hint: consider it as an infinite matrix. The determinant of a matrix is the product of its eigenvalues.

Exercise (2). Define the inverse operator G=A^{-1} of A. Hint: from the inverse of an (infinite) matrix with AA^{-1}=1, take AG(x,y)=\delta (x-y) and solve it for G.

Exercise (3). Compute the action S of classical mechanics as a functional of the path q(t) for a free particle and for a particle interacting with a potential V(x,t).

Exercise (4). Compute the action functional for the free particle, the harmonic oscillator potential and the Kepler/Newton-like potential.

See you in my next Path Integral TSOR post!!!!
