The Nature and History of the So-Called Renormalization Group
San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
USA


## Background

• Perturbative Technique:

This is an approach to analyzing difficult problems. It starts from a related problem with a known solution. The unsolved problem is represented as the solved problem plus a series of successive perturbations weighted by powers of an expansion parameter ε; i.e.,

#### H = H0 + εΔ1 + ε²Δ2 + ε³Δ3 + …

The solution to the unsolved problem is then represented as

#### X = limε→1(X0 + εX1 + ε²X2 + ε³X3 + …)

This iterative approach of adding successive corrections to the solution of a solvable problem has been useful in obtaining practical solutions to difficult problems.
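As a toy illustration of the technique, consider the quadratic x² − εx − 1 = 0, which is solved at ε = 0. Substituting the series x = x0 + εx1 + ε²x2 and collecting powers of ε yields the corrections. The problem and its correction values are hypothetical choices for this sketch, not from the article.

```python
import math

# Perturbative solution of the toy problem x^2 - eps*x - 1 = 0.
# At eps = 0 the problem is solved: x0 = 1.  Collecting powers of eps gives:
x0 = 1.0
x1 = 0.5        # from the O(eps) terms:   2*x0*x1 - x0 = 0
x2 = 0.125      # from the O(eps^2) terms: x1**2 + 2*x0*x2 - x1 = 0

def series(eps):
    """Truncated perturbation series for the positive root."""
    return x0 + eps * x1 + eps**2 * x2

def exact(eps):
    """Exact positive root, available here as a check."""
    return (eps + math.sqrt(eps**2 + 4)) / 2

# Taking eps -> 1 recovers an approximation to the original hard problem.
print(series(1.0))   # 1.625
print(exact(1.0))    # the golden ratio, about 1.618
```

Even the two-term correction lands within one percent of the exact root, which is the practical appeal of the method.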

• Quantum Field Theory:

Up until the 1940s physical analysis concerning particles was carried out in terms of point particles. In the 1940s physicists began to represent an electron in terms of its associated electric field. Photons were then vibrations in such fields. This was called quantum field theory (QFT).

QFT applied to the interactions of electrons and photons was called quantum electrodynamics (QED). A peculiar phenomenon developed in applying the perturbative technique to QED. For some quantities the first correction gave results that matched experimental measurements very closely. However the second and higher order corrections were infinite.

More generally the computation of some quantities in QED was yielding infinities in more than one place. Physicists began trying to cancel out the infinities. This process became known as renormalization, in analogy with the process of computing probabilities called normalization. Probabilities need to add up to unity, so the sum of the tentative probabilities is computed and each tentative probability is divided by that sum.

• Phase Transition Phenomena and their Critical Temperatures:

1. Melting of a solid into a liquid; freezing a liquid into a solid.
2. Boiling of a liquid into a vapor; condensing of a vapor into a liquid.
3. Loss of magnetization
4. Loss of transparency, onset of opalescence
5. Loss of superconductivity
6. Turbulence

• Mathematical Groups and Semigroups:

A mathematical group is a set of elements S together with a binary function f( , ) defined on S such that

1. the function is associative: f(a, f(b, c))=f(f(a,b), c) for all a, b and c in S.
2. there exists an identity element e in S such that f(e, a) = a for all a in S.
3. there exists an inverse b for each a in S; i.e., f(b, a) = e.

A semigroup only has associativity. It lacks an identity element and inverses. It would be easy to add an identity element but inverses are a different matter. A semigroup with an identity is called a monoid.

Transformations, such as linear transformations of spaces, are easily interpreted as elements of a semigroup and possibly of a group. Concatenations of transformations are associative. A do-nothing transformation serves as a right and left identity.
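These definitions can be checked concretely for 2×2 matrices under composition, which form a monoid but not a group; the particular matrices below are illustrative choices, not from the article.

```python
# 2x2 matrices under composition: associativity holds and the identity
# matrix is a two-sided identity, but some elements have no inverse.

def compose(a, b):
    """Matrix product of two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

I = ((1, 0), (0, 1))     # the do-nothing transformation
A = ((2, 1), (0, 1))
B = ((0, 1), (1, 0))
C = ((1, 1), (1, 2))
P = ((1, 0), (0, 0))     # a projection: determinant 0, so no inverse

# Associativity: f(a, f(b, c)) == f(f(a, b), c)
assert compose(A, compose(B, C)) == compose(compose(A, B), C)
# Identity: f(e, a) == a == f(a, e)
assert compose(I, P) == P == compose(P, I)
# Because P has no inverse, the structure is a monoid rather than a group.
print("monoid axioms verified")
```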

• Ising Models in Statistical Mechanics:

The first Ising model was formulated by Wilhelm Lenz in 1920 to analyze ferromagnetism. It postulated that the magnetic interactions at the atomic level involved only nearest neighbors. The strength of the nearest neighbor interactions was assumed to be inversely proportional to absolute temperature. He gave this model to his student Ernst Ising for his dissertation topic. Ising solved the model explicitly for the one dimensional case in 1924. That solution did not involve there being a critical temperature above which magnetization is lost. That suggested that Ising models were not appropriate for investigating critical temperature phenomena.

However in 1944 Lars Onsager got an explicit solution to the two dimensional Ising model and that solution did involve a critical temperature for the loss of magnetization. That sparked great interest in Ising models and their application to phenomena other than ferromagnetization.

Unfortunately the three dimensional Ising model has not yet been solved explicitly. However there has been much work on it using Monte Carlo simulation methods.
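The Monte Carlo simulation methods mentioned above can be sketched minimally with the Metropolis algorithm on a small two dimensional lattice; the lattice size, coupling value, and sweep count here are arbitrary illustrative choices, not a production simulation.

```python
import math
import random

# Minimal Metropolis Monte Carlo for a small 2D Ising lattice with
# periodic boundaries.
random.seed(0)
N = 16                                   # lattice is N x N
K = 0.3                                  # nearest-neighbor coupling J/(kT)
spins = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def neighbor_sum(i, j):
    """Sum of the four nearest-neighbor spins of site (i, j)."""
    return (spins[(i + 1) % N][j] + spins[(i - 1) % N][j] +
            spins[i][(j + 1) % N] + spins[i][(j - 1) % N])

def sweep():
    """One Metropolis pass over every site."""
    for i in range(N):
        for j in range(N):
            dE = 2 * K * spins[i][j] * neighbor_sum(i, j)  # cost of a flip
            if dE <= 0 or random.random() < math.exp(-dE):
                spins[i][j] = -spins[i][j]

for _ in range(100):
    sweep()
magnetization = abs(sum(map(sum, spins))) / N**2
print(magnetization)
```

Running at couplings above and below the known two dimensional critical value lets such a simulation be checked against Onsager's exact solution.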

• Publication History:

The first publication on the so-called renormalization group, and the one that coined its name, was by Ernst Carl Gerlach Stueckelberg and André Petermann in Helvetica Physica Acta in 1951. About the same time Murray Gell-Mann and Francis Low were working on the same topics but they did not use the term renormalization group. They did not publish their work until 1954. It was published in the September 1954 issue of Physical Review under the title "Quantum Electrodynamics at Small Distances" (pp. 1300-1312). This was a widely read article. One reason they would not have used the term renormalization group is that mathematically the structure they used was not a group; it was only a semigroup. It lacked an identity element and inverses.

In 1959 in the Soviet Union Nikolay Bogolyubov and Dimitry Shirkov published a book which involved the application of the renormalization group to QED.

In 1966 Leo P. Kadanoff introduced the concept of block spins to bring in the notion of transformations of scale.

The topic of the so-called renormalization group came to full fruition in the two articles published by Kenneth Geddes Wilson in Physical Review B in November of 1971. Both were entitled "Renormalization Group and Critical Phenomena." Part I was subtitled "Renormalization Group and the Kadanoff Scaling Picture." Part II was subtitled "Phase-Space Cell Analysis of Critical Behavior."

Wilson went on to apply his analysis to numerous fields, such as the Kondo problem in 1975 (Reviews of Modern Physics, October 1975).

Wilson received the Nobel Prize in Physics for 1982. In the Nobel award he was given credit for having solved one of the three problems in physics where the experimental facts were known but the theoretical explanation was missing for a long period. Those three were superconductivity, critical phenomena and turbulence. Superconductivity had been solved more than ten years before. Wilson's work solved the problem of critical phenomena. Turbulence is yet to be solved.

In his excellent popular article published in Scientific American in August of 1979 ("Problems in Physics with Many Scales of Length," pp. 158-179) Wilson considers a two dimensional lattice in which nearest neighbors interact with a strength given by a coupling constant K inversely proportional to absolute temperature. He wanted to construct an algorithm that would be applicable to the more general case. Since the Onsager solution was available for the two dimensional case he had a check on the results of any algorithm he applied.

The algorithm consisted of looking at the lattice as being made up of spin blocks of four atoms and then considering spin blocks made up of four of the smaller spin blocks. If the scale of the smaller spin block squares is L then it is 2L for the spin blocks of spin blocks. If the interaction parameter, the coupling constant, for the atoms is K then for the spin blocks it is K². For spin blocks of spin blocks it would be (K²)² = K⁴. If K<1 then it does not take many shifts in scale to exhaust the extent of the spatial linkage.

However if K=1 the linkages extend to all scale levels. The temperature corresponding to K=1 would be the critical temperature. Thus the notion of a critical temperature enters the picture quite naturally with the notion of changes of scale.
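The shift-of-scale iteration just described, K → K² at each doubling of scale, can be sketched in a few lines; the function name and starting values are illustrative.

```python
# Effective coupling under successive changes of scale: each doubling of
# the block size squares the coupling constant.
def coupling_flow(K, steps):
    """Return the coupling after each of `steps` changes of scale."""
    history = [K]
    for _ in range(steps):
        K = K * K
        history.append(K)
    return history

print(coupling_flow(0.9, 6))   # K < 1: couplings die away, linkage exhausted
print(coupling_flow(1.0, 6))   # K = 1: linkage persists at every scale
```

The qualitative dichotomy is the point: any K below 1 collapses toward 0 after a handful of scale shifts, while K = 1 is preserved at all scales, marking the critical temperature.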

The properties of a system are not just an all-or-nothing function of temperature and a critical temperature. The properties depend upon the relative deviation of the temperature from the critical temperature; i.e.,

#### t = (T−TC)/TC

Wilson calls t the reduced temperature.

The system properties take the form of scaling laws; i.e., M = |t|^α.

Here are the values of the exponents for three properties as given by the mean field theory of Landau and the values computed from Onsager's exact solution of the two dimensional Ising model.

Comparison of Scaling Exponents

| Property | Mean Field Exponent | Onsager Ising Exponent |
|---|---|---|
| Magnetization | 1/2 | 1/8 |
| Magnetic Susceptibility | −1 | −7/4 |
| Correlation Length | −1/2 | −1 |

Thus it is shown that mean field theory is not correct for two dimensional Ising models. Elsewhere it is shown that mean field theory is correct for Ising models of dimensions four or greater.

## Part I of Wilson's 1971 Article

In Part I of his 1971 pair of articles Wilson examines Kadanoff's notion of spin blocks in detail. This starts with a statistical mechanics partition function Z of roughly the following form.

#### Z(K, h) = Σiexp(KΣΣsisj + hΣsi)

where K is the coupling constant divided by kBT, the product of Boltzmann's constant and the absolute temperature. The electron spin at site i is denoted si. The parameter h is a coefficient for magnetization, proportional to the intensity of the external magnetic field divided by kBT. The outer summation on the RHS is over all spin configurations. The double summation inside the exponential is over all nearest neighbor pairs and the last summation is over all sites.

In Kadanoff's model si can only take on the values of ±1; in Wilson's model si can take on any real value.
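For a concrete, if tiny, illustration of the partition function above, Z can be evaluated by brute force for a four-spin ring with si = ±1 (the Kadanoff case); the ring geometry and its size are hypothetical choices for this sketch only.

```python
import math
from itertools import product

# Brute-force partition function Z(K, h) for a tiny ring of n spins,
# summing exp(K * sum_nn s_i s_j + h * sum_i s_i) over all configurations.
def partition(K, h, n=4):
    Z = 0.0
    for config in product((-1, 1), repeat=n):       # all spin configurations
        nn = sum(config[i] * config[(i + 1) % n]    # nearest-neighbor pairs
                 for i in range(n))
        Z += math.exp(K * nn + h * sum(config))
    return Z

print(partition(0.0, 0.0))   # 16.0: with no interactions, Z = 2**4
```

Note that Z is unchanged under h → −h (flip every spin), which is the symmetry invoked later to argue that the flow functions depend on h only through h².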

The crucial variable for Wilson's analysis is the mean free energy over all space, defined as

#### F(K, h) = limV→∞ (1/V)ln(Z(K, h))

This free energy is E−TS, where E is total energy, S is entropy and T is absolute temperature. In effect it is the energy available for doing work; that is to say the energy less the thermal energy.

## Wilson's modeling

Wilson then considers an infinite cubic lattice divided into cubic blocks L lattice sites on each side. Each block contains L³ sites.

If the transformation of the scale of the analysis leaves the results of the analysis unaffected then it must be that:

#### F(K, h) = (1/L³)F(KL, hL)

Likewise the correlation length ξ must transform with scale as

#### ξ(K, h) = Lξ(KL, hL)

Kadanoff envisioned L as having only integral values; Wilson allows L to have any positive real value so that he can derive differential equations.

Kadanoff found the dependence of KL and hL on L to be of the forms

#### KL = KC − εL^y and hL = h·L^x, where ε = KC − K, KC is a critical value, and x and y are constants

These are called the Widom-Kadanoff scaling laws. Wilson found that these relationships arise from his approach as well as from Kadanoff's. They can be expressed more succinctly as

#### KC − KL = (KC − K)L^y and hL = h·L^x

Wilson then asserts but does not explain that L(dKL/dL) depends upon KL and hL but not on L directly; i.e.,

#### L(dKL/dL) = u(KL, (hL)²) and hence (dKL/dL) = (1/L)u(KL, (hL)²)

where u(_, _) is a function to be determined. The dependency is on h² because the Hamiltonian for the system is unchanged by h→(−h).

Wilson likewise asserts that

#### L(dhL/dL) = hLv(KL, (hL)²) and hence (dhL/dL) = (1/L)hLv(KL, (hL)²)

where v(_, _) is a function to be determined.

Wilson then differentiates the equation

#### F(K, h) = (1/L³)F(KL, hL)

with respect to L to get

#### 0 = −3(1/L⁴)F(KL, hL) + (1/L³)(∂F(KL, hL)/∂KL)(dKL/dL) + (1/L³)(∂F(KL, hL)/∂hL)(dhL/dL)

which by factoring out (1/L³) from each term yields

#### 0 = (1/L³)[−(3/L)F(KL, hL) + (∂F(KL, hL)/∂KL)(dKL/dL) + (∂F(KL, hL)/∂hL)(dhL/dL)]

By a similar procedure the differentiation of ξ(K, h) = Lξ(KL, hL) yields

#### 0 = L[ξ(KL, hL)/L + (∂ξ(KL, hL)/∂KL)(dKL/dL) + (∂ξ(KL, hL)/∂hL)(dhL/dL)]

This is a set of two equations in the two unknowns (dKL/dL) and (dhL/dL). The factor of (1/L³) in the first equation and the factor of L in the second can be eliminated. The values of (dKL/dL) and (dhL/dL) may then be determined.

But it was previously asserted that

#### (dKL/dL) = (1/L)u(KL, (hL)²) and (dhL/dL) = (hL/L)v(KL, (hL)²)

Thus explicit expressions for u(_, _) and v(_, _) are obtained. The expressions found are functions of KL and (hL)² but not directly functions of L.

Then the equations

#### (dKL/dL) = (1/L)u(KL, (hL)²) and (dhL/dL) = (1/L)hLv(KL, (hL)²)

with the u(_, _) and v(_, _) derived from the previous set of equations may be labeled the differential equations of the renormalization group.
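These flow equations can be integrated numerically once u and v are specified. The sketch below assumes hypothetical linearized forms for u and v near an assumed critical coupling (the article leaves u and v to be determined from F and ξ), purely to show the character of the flow.

```python
# Euler integration of the renormalization-group flow equations
#   dK_L/dL = (1/L) u(K_L, h_L^2)    and    dh_L/dL = (1/L) h_L v(K_L, h_L^2)
# with illustrative, assumed forms for u and v.
def flow(K, h, L=1.0, L_max=100.0, dL=0.001):
    Kc, y, x = 1.0, 1.0, 0.5          # assumed critical value and exponents
    u = lambda KL, h2: y * (KL - Kc)  # hypothetical linearized u
    v = lambda KL, h2: x              # hypothetical constant v
    while L < L_max:
        dK = (1.0 / L) * u(K, h * h) * dL
        dh = (1.0 / L) * h * v(K, h * h) * dL
        K, h, L = K + dK, h + dh, L + dL
    return K, h

print(flow(0.99, 0.01))   # K below the critical value: flows away from Kc
print(flow(1.0, 0.0))     # exactly critical: a fixed point of the flow
```

With these linearized forms the flow reproduces the scaling laws above: KC − KL grows like εL^y while hL grows like hL^x, and the critical point (KC, 0) sits as a fixed point.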

For further analysis Wilson desired that u(KL, (hL)²) and v(KL, (hL)²) be analytic at the critical point. This means their values may be expressed as a Taylor series in the deviations of their arguments from the critical point. The problem is that u(KL, (hL)²) and v(KL, (hL)²) are functions of F(K, h) and ξ(K, h), both of which have some sort of singularity at the critical point.

Wilson then says

The Kadanoff block hypothesis suggests that u(K, h²) and v(K, h²) will indeed be analytic at the critical point.

## Critical Points

For values of L much less than the correlation length

#### dKL/dL ≅ (1/L)y(KL−KC) and dhL/dL ≅ (1/L)x·hL, where x = v(KC, 0) and y = ∂u(K, h²)/∂K evaluated at K=KC and h=0

Wilson then assumes that the above approximations are valid from KL=KC down to KL=KC/2. The scaling law

#### KC − KL = εL^y, where ε = KC − K,

implies that for KL=KC/2

#### KC/2 = εL^y and hence L = (KC/(2ε))^(1/y)

In this formula ε is an unknown quantity.
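As a numerical illustration of the formula L = (KC/(2ε))^(1/y), here are sample values of L for several deviations ε; KC, y, and the ε values are assumed purely for this example.

```python
# The scale L at which the running coupling K_L has fallen to Kc/2,
# for illustrative values of the critical coupling Kc and exponent y.
Kc, y = 1.0, 2.0
for eps in (0.1, 0.01, 0.001):
    L = (Kc / (2 * eps)) ** (1.0 / y)
    print(eps, L)   # the closer K is to Kc, the larger the scale L required
```

The trend is the point: as the temperature approaches the critical temperature (ε → 0), the scale L needed to reduce the coupling diverges, reflecting the divergence of the correlation length.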

The value of hL at the point where KL=KC/2 is given by the scaling law hL = h·L^x.

Since

#### F(K, h) = (1/L³)F(KL, hL) and ξ(K, h) = Lξ(KL, hL)

the previously found values for L, KL and hL may be substituted into the above formulas to give F(K, h) and ξ(K, h) as functions of the single unknown hε^(x/y).

(To be continued.)

In Part II Wilson writes

No justification has been found for the Kadanoff picture. […] Because there has been no justification for the Kadanoff picture, it has been impossible to calculate specific exponents within the Kadanoff picture; the best one can do is to derive the scaling laws which relate all the critical exponents to the two unknown parameters.

By this Wilson means that the specific assumptions Kadanoff makes, such as that spin equals ±1, have found no justification.

## Part II of Wilson's 1971 Article

Mathematics possesses not only truth, but supreme beauty -- a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature… sublimely pure, and capable of a stern perfection such as only the greatest art can show. (Bertrand Russell)

In Part II Wilson develops a differential equation approach to the problem of critical values. It is ironic that all of the practitioners of renormalization group methodology, except Gell-Mann and Low, use the term renormalization group even though they know full well that no group per se is involved.

Steven Weinberg in his article, "Why the Renormalization Group is a Good Thing," says,

Ken Wilson, perhaps alone of all theoretical physicists, was well aware of the importance of using the renormalization group ideas of Gell-Mann and Low through the late 1960s and early 1970s.

However about the same time in the field of pure mathematics Benoit Mandelbrot was developing the notion of mathematically self-similar structures. He called them fractals because usually they were of fractional dimension. Mathematical self-similarity means that a subset of an object contains all of the structure of the whole set.

In physics a change of scale may produce something closely akin to the original. For example, suppose one takes a picture of a fluid in turbulence and then selects a small portion of that picture and enlarges it. In turbulence that enlarged picture looks essentially the same as the original. There are whirls within whirls within whirls ad infinitum.

• Wilson's Partial Differential Equation Analysis:

In Part II Wilson generalizes the Ising model such that the partition function takes the form:

#### Z(K, h; r, λ) = Πi ∫−∞∞ exp(KΣΣ sisj + hΣ si − rΣ si² − λΣ si⁴) dsi

where KΣΣ sisj denotes nearest neighbor interactions.

In order to get partial differential equations Wilson needed to make all variables continuous. That meant that spin, instead of being ±1, could take on any real value from −∞ to +∞.

Wilson states that he uses the Landau-Ginzburg form for interactions; i.e., the Hamiltonian is of the form

#### HL = −½KL∫x (∇sL(x))² − ∫x PL(sL(x))

where KL is a constant. The variable x is a vector of dimension d and thus ∫x means ∫ddx. (∇sL(x))² of course means the gradient of the spin, squared. PL(sL(x)) is a function which may change with L. It effectively introduces an infinite number of variables into the analysis which are irrelevant.