|San José State University|
Interpretation of Quantum Theory
A key element of the Copenhagen Interpretation (CI) of Quantum Theory is that particles generally do not have a material existence, but exist only as probability distributions unless they are subject to observation. This unintuitive notion leads to other, even more unintuitive notions of the physical world.
Consider a particle following a periodic path that requires a time T to complete. The probability of finding it in a path interval Δs is proportional to the interval of time Δt it spends in that path interval. The relationship is

Δt = Δs/v(s) and hence Prob{particle in Δs} = Δt/T = Δs/(v(s)T)
where v(s) is the velocity of the particle at point s of the path.
Thus the probability density at a point s of the path is given by

P(s) = 1/(T·v(s))

This is the time-spent probability density distribution for the particle's location. It characterizes the dynamic appearance of the particle in motion.
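This relationship is easy to check numerically. The following is a minimal sketch (assuming, purely for illustration, a unit-amplitude oscillation x(t) = cos t, which is not a case specified in the text): positions sampled at uniformly spaced times are histogrammed and compared with the corresponding time-spent density.

```python
import numpy as np

# Hypothetical example: a particle with x(t) = cos(t), period T = 2*pi.
# Sampling its position at uniformly spaced times and histogramming
# approximates the time-spent probability density of its location.
T = 2 * np.pi
t = np.linspace(0.0, T, 200001)[:-1]   # one full period, endpoint dropped
x = np.cos(t)

counts, edges = np.histogram(x, bins=50, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# For this motion v(x) = sqrt(1 - x^2), so p(x) = 1/(pi*sqrt(1 - x^2)).
# Compare away from the turning points at x = +/-1, where p diverges.
pred = 1.0 / (np.pi * np.sqrt(1.0 - centers**2))
interior = np.abs(centers) < 0.8
max_err = np.max(np.abs(counts[interior] - pred[interior]))
print(max_err)
```

The density is lowest at the center, where the particle moves fastest, and rises toward the turning points, where it moves slowly.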
There is also a time-spent probability density distribution for the particle's velocity derived from

Δt = Δv/|a(v)| and hence P(v) = 1/(T·|a(v)|)
where a is the acceleration of the particle.
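The same construction applies to the velocity. A sketch under the same illustrative assumption x(t) = cos t, so v(t) = −sin t and |a| = (1−v²)^½; note that each velocity value is visited twice per period, which supplies a factor of 2:

```python
import numpy as np

# For the hypothetical motion x(t) = cos(t): v(t) = -sin(t), a(t) = -cos(t),
# so |a| = sqrt(1 - v^2). Each velocity is visited twice per period,
# giving the time-spent density p(v) = 2/(T*|a(v)|) = 1/(pi*sqrt(1 - v^2)).
T = 2 * np.pi
t = np.linspace(0.0, T, 200001)[:-1]
v = -np.sin(t)

counts, edges = np.histogram(v, bins=50, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

pred = 1.0 / (np.pi * np.sqrt(1.0 - centers**2))
interior = np.abs(centers) < 0.8
max_err = np.max(np.abs(counts[interior] - pred[interior]))
print(max_err)
```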
The time-spent probability distributions have mean values and variances like any other probability distributions.
Consider a particle of mass m moving in a one-dimensional space x subject to a potential V(x). The energy E of the particle is given by:

E = ½mv² + V(x)
The quantity (E−V(x)) can be represented as K(x), the kinetic energy of the particle expressed in terms of its location. The velocity of the particle is then

v(x) = (2K(x)/m)^½
The probability density is thus inversely proportional to K(x)^½.
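This inverse proportionality can be demonstrated for a concrete potential. The sketch below uses a hypothetical quartic potential V(x) = x⁴/4 (an illustrative choice, not one discussed in the text), integrates the motion with a leapfrog scheme, and checks that the time-spent density times K(x)^½ is roughly constant away from the turning points.

```python
import numpy as np

# Hypothetical quartic potential V(x) = x^4/4 with m = 1, so the force
# is F = -dV/dx = -x^3. Starting at rest at x = 1 gives E = V(1) = 1/4.
def force(x):
    return -x**3

E = 0.25
dt = 0.002
x, v = 1.0, 0.0
xs = np.empty(500_000)
for i in range(xs.size):          # velocity-Verlet (leapfrog) integration
    a = force(x)
    x += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + force(x)) * dt
    xs[i] = x

# Histogram of positions sampled uniformly in time over many periods.
counts, edges = np.histogram(xs, bins=40, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# p(x)*K(x)^(1/2) should be constant away from the turning points.
interior = np.abs(centers) < 0.8
ratio = counts[interior] * np.sqrt(E - centers[interior]**4 / 4)
spread = ratio.std() / ratio.mean()
print(spread)
```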
The acceleration of the particle is given by

a = −(1/m)(dV/dx)

This is just a manifestation of Newton's law F = ma, with the force on the particle given by F = −(dV/dx).
Thus time-spent probability density distributions exist for the location and speed of a material particle moving in a periodic path.
Niels Bohr nearly a century ago observed that classical analysis for many areas of physics had been empirically verified. Therefore, for any quantum mechanical analysis to be valid, its appropriate extension to the realm of classical analysis should agree with classical analysis. This is called the Correspondence Principle. In atomic physics the extension is in terms of scale and/or the level of energy. What comes out of the solution to the time-independent Schrödinger equation for one-dimensional systems is that with higher energy there are more and more rapid (dense) fluctuations in probability density. An example is the probability density function from the solution of the time-independent Schrödinger equation for a harmonic oscillator.
When these probability densities are averaged over time (and/or space) the result should be the time-spent probability density function of classical analysis. This is a modified version of the conventional Correspondence Principle in that it involves time-averaging as well as finding the limit as energy increases without bound. Classical means time-averaged as well as macroscopic with respect to scale and energy. The limit of the quantum theoretic solution is necessarily a probability density function. Classical analysis is deterministic, but the proportion of the time the system spends in its various allowable states makes sense as a probability density function.
The correspondence can work both ways. The time- or spatially-averaged quantum theoretic probability distribution going to the time-spent probability density distribution corresponds to the quantum theoretic probability distribution being in the nature of a time-spent probability distribution. This means that the speed of the quantum mechanical particle is inversely proportional to its probability density, and hence that the motion of a quantum level particle alternates between slow and fast movement. Allowable states are not so much fixed positions as regions through which the particle moves relatively slowly. There are no instantaneous jumps between quantum states but instead regions of relatively rapid movement.
The notion of a particle not having a continuous material existence stemmed from the Uncertainty Principle; i.e.,

σxσp ≥ ħ/2

where σx is the uncertainty of the particle's location, σp is the uncertainty of the particle's momentum (as determined by the particle's velocity), and ħ is Planck's constant divided by 2π.
The belief was that the scale of an atom was so limited that the uncertainty of an electron's location had to be small, and therefore the uncertainty of its speed so large, that an electron could not be said to have the definite trajectory of a material object. This was Heisenberg's contribution to quantum theory.
To counter this notion consider the time-spent distributions for a harmonic oscillator. It can be shown that the time-spent probability distributions for a harmonic oscillator satisfy the Uncertainty Principle.
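A quick numerical check, assuming the natural units introduced below (ħ = m = k = 1, hence ω = 1): for the classical oscillator with energy E = n + ½, the time-spent standard deviations of position and momentum give σxσp = n + ½ ≥ ½ = ħ/2.

```python
import numpy as np

# Natural units: hbar = m = k = 1, so omega = 1. For quantum number n,
# the classical oscillator with energy E = n + 1/2 has amplitude
# xm = sqrt(2n+1) and motion x(t) = xm*cos(t), p(t) = -xm*sin(t).
n = 4
E = n + 0.5
xm = np.sqrt(2 * n + 1)

# Sample one full period uniformly in time (duplicate endpoint dropped).
t = np.linspace(0.0, 2 * np.pi, 100001)[:-1]
x = xm * np.cos(t)
p = -xm * np.sin(t)

sigma_x = x.std()            # time-spent std of position = xm/sqrt(2)
sigma_p = p.std()            # time-spent std of momentum = xm/sqrt(2)
print(sigma_x * sigma_p)     # equals E = n + 1/2 >= hbar/2
```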
As noted above, a harmonic oscillator is a system in which the restoring force on the particle is proportional to its displacement from equilibrium.
Let φn(ζ) be the wave function for a harmonic oscillator with principal quantum number n. The variable ζ is dimensionless and is defined as x/σ, where x is the displacement and σ is the natural unit of length for a harmonic oscillator with mass m and stiffness coefficient k. It is defined by

σ = (ħ²/(mk))^¼
The squared wave functions for a harmonic oscillator with principal quantum number n are given by the formula

φn(ζ)² = Hn(ζ)²·exp(−ζ²)/(2^n·n!·π^½)
where Hn(ζ) is the Hermite polynomial of order n.
The probability density function in terms of the displacement x is then given by

P(x) = φn(x/σ)²/σ
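A sketch of this density in code, working directly in the dimensionless variable ζ (using the physicists' Hermite convention of numpy.polynomial.hermite), with a check that it integrates to unity:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def qho_density(zeta, n):
    """Squared harmonic-oscillator wave function |phi_n(zeta)|^2 in the
    dimensionless variable zeta = x/sigma (physicists' Hermite H_n)."""
    Hn = hermval(zeta, [0.0] * n + [1.0])      # Hermite polynomial H_n
    norm = 2.0**n * factorial(n) * sqrt(pi)    # 2^n * n! * sqrt(pi)
    return Hn**2 * np.exp(-zeta**2) / norm

# The density should integrate to 1; check for n = 4, the case
# discussed in the text.
zeta = np.linspace(-10.0, 10.0, 20001)
dz = zeta[1] - zeta[0]
total = np.sum(qho_density(zeta, 4)) * dz
print(total)
```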
The classical probability density function is given by

P(x) = 1/(π(xm²−x²)^½)

where xm is the maximum displacement for the oscillator and its value is given by

xm = (2E/k)^½
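The classical density has integrable singularities at ±xm. A sketch (natural units with k = 1 and the n = 4 energy E = 4.5 assumed for concreteness) that checks the normalization against the closed-form cumulative distribution F(x) = ½ + arcsin(x/xm)/π:

```python
import numpy as np

# Classical oscillator density p(x) = 1/(pi*sqrt(xm^2 - x^2)). Assume
# k = 1 and E = 4.5 (the n = 4 case in natural units), so xm = 3.
k, E = 1.0, 4.5
xm = np.sqrt(2 * E / k)

# Integrate numerically up to eps short of the singular endpoints and
# compare with the exact CDF difference 2*arcsin((xm-eps)/xm)/pi.
eps = 1e-3
x = np.linspace(-xm + eps, xm - eps, 200001)
dx = x[1] - x[0]
p = 1.0 / (np.pi * np.sqrt(xm**2 - x**2))
numeric = np.sum(p) * dx
exact = 2.0 * np.arcsin((xm - eps) / xm) / np.pi
print(numeric, exact)   # both just under 1
```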
where E is the total energy of the oscillator. The quantum mechanical value for E is ħω(n+½). Since σ² = ħω/k, this means that

xm = (2n+1)^½·σ
The units of the dimensions can be chosen so that ħ=1, and the values of m and k likewise may be set equal to 1. This means that ω=1 and likewise σ=1.
The energy of the oscillator is then equal to (n+½). This makes the maximum displacement for the classical oscillator equal to (2n+1)^½.
When n is equal to 4 the two probability density functions are as shown below.
There are singularities at ±xm for the classical oscillator, which in the above case of n=4 are at ±3.
The ratios of the corresponding probability densities are shown below.
This ratio would be sensitive to any errors involved in the computation of the probability densities, particularly systematic ones. However, since both functions are probability distributions, the areas under the curves must equal unity. The computation of the area involves an approximation, but the curves utilized in computing the ratios essentially satisfy that requirement. Since the quantum mechanical probability extends outside of the range allowed for the classical oscillator, the quantum mechanical probability densities are necessarily less than the classical harmonic oscillator values.
Thus for the case of n=4 the relationship between the spatial average of the quantum mechanical probability densities and the classical ones is not close, although the general shapes match for low displacements.
The correspondence is much closer for the case of n=60 which is shown below.
The correspondence at either end of the range appears not to be close because of the singularities there for the classical case. However in the above graphs the spatial averages for the lobes of the quantum probability density distributions are plotted as red dots. These red dots fall almost exactly on the curve for the classical probability density distribution. The averages for the quantum probability density function do not have to match the singularities of the classical function. It is the classical function whose values must match the quantum function values at isolated points near the end of the range of the oscillation.
However, it is known from the previous study that the spatial average of the quantum mechanical probability density near zero displacement is asymptotically equal to the classical probability density at that location. It is quite likely that as the principal quantum number n, and hence the energy, increases without bound, the spatial averages of the quantum mechanical probability densities go to the classical values. This is what would be expected: generally, as the scale of a system increases, quantities approach their classical values more closely. This means that the quantum mechanical probability density functions do not represent some pure indeterminacy of the particle, as in the Copenhagen Interpretation, but instead the proportion of the time a moving particle spends near the various points.
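That asymptotic agreement near zero displacement can be illustrated numerically. A sketch for n = 60, averaging the rapidly oscillating quantum density over eight of its local periods around ζ = 0 and comparing with the classical value there (the Hermite-function form of the density is the standard one, evaluated with numpy.polynomial.hermite):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

n = 60
norm = 2.0**n * factorial(n) * sqrt(pi)   # 2^n * n! * sqrt(pi)

def density(zeta):
    """|phi_n(zeta)|^2 for the harmonic oscillator (physicists' H_n)."""
    Hn = hermval(zeta, [0.0] * n + [1.0])
    return Hn**2 * np.exp(-zeta**2) / norm

# Near zeta = 0 the density oscillates with local period pi/sqrt(2n+1);
# average it over eight of those periods, centered on zero.
w = 4.0 * np.pi / np.sqrt(2 * n + 1)
zeta = np.linspace(-w, w, 40001)
quantum_avg = density(zeta).mean()

# Classical density at zero displacement: 1/(pi*xm) with xm = sqrt(2n+1).
classical_0 = 1.0 / (np.pi * np.sqrt(2 * n + 1))
print(quantum_avg, classical_0)
```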
The absence of material existence for particles was initially simply an interpretation of quantum theory; i.e., the Copenhagen Interpretation. Bell's Theorem offered the possibility of empirical verification of that interpretation. The testing using photons seemed to confirm it, but photons are quite different in their nature from electrons, protons and neutrons. Photons do not have a material presence. We know from the studies of solitons and solitary waves that a wave may have particleness as well as being a wave.
So the experimental results concerning the testing of Bell's Theorem must be carefully examined to see if there is an alternate interpretation from the conventional one. If some factor important in the physical world has been left out of the derivation then it would not be surprising that a physical experiment does not give results based on a derivation that leaves out that factor.
The predictions of the assumption that the characteristics of a pair of particles are determined at the time of the formation of the pair have been confirmed innumerable times. These confirmations were dismissed, and the theory's place was taken by the implausible apparatus of the Copenhagen Interpretation and the Bell Inequality, with the communication between paired particles taking place at speeds faster than that of light, even at infinite rates. If such an alternative is considered acceptable then there should not be much difficulty in coming up with an alternate explanation for any experimental violations of Bell's Inequality.
The dilemma is that there are two apparent discrepancies between theories and empirical measurements. One, based upon Bell's Theorem, is shown below.
The classical case shown corresponds to material particles having a continual material presence, with the characteristics of a pair of particles fixed at the time of the formation of the pair. The quantum case shown is for a pair of particles that have no material presence until a characteristic of one of them is measured. There is a remarkably small difference. On the other hand there is the very great difference between the implied speed of communication between the two paired particles and the evidence supporting the Special Theory of Relativity. The maximum speed of communication classically is the speed of light. And there is communication between the paired particles through their gravitational and electromagnetic fields, and this constitutes something in the nature of measurement. The speed of communication implied by the Copenhagen Interpretation has no upper bound.
Which seems more likely to resolve the discrepancy between theory and measurement? On the one hand there could be an expansion of the Bell Theorem analysis to take into account such factors as the gradients in the ambient electrical and magnetic fields in the vicinity of the measuring apparatus. It was, after all, the gradient in a magnetic field in the Stern-Gerlach experiment of 1922 that produced the first evidence of atomic spin.
On the other hand there could be the search for some mechanism that would account for communication between paired particles at speeds greater than the speed of light, perhaps even at infinite rates. There seems to have been no attempt to pursue this latter goal. That perhaps is a manifestation of the exasperating true-believer-hood of those who believe in the Copenhagen Interpretation.
What has been of concern among physicists dealing with experimental violations of the Bell Inequality of the Bell Theorem is the existence of loopholes in the interpretation of experimental results. One loophole, called the freedom-of-choice loophole, has to do with the misalignment of the measuring instruments with the line of travel of the particles. Another loophole is called the superdeterministic-Universe loophole. This is the notion that everything in the Universe has been predetermined and there is no freedom in the outcome of experiments.
If the instruments are aligned then the correlation between the readings for the particles and their partners is 100 percent. If the instruments are anti-aligned the correlation is also 100 percent. If the instruments are aligned perpendicular to the line of travel of the particles the correlation has a zero expected value. It is only when the instruments are misaligned that the correlation can give a result at variance with the classical model.
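For reference, the correlation curves behind such comparisons can be sketched as follows. These formulas, the quantum singlet-state correlation −cos θ and a piecewise-linear classical hidden-variable correlation, are the conventional textbook forms and are not derived in the text:

```python
import numpy as np

# Detector misalignment angle theta from 0 (aligned) to pi (anti-aligned).
theta = np.linspace(0.0, np.pi, 181)

# Standard textbook forms: quantum singlet correlation vs. the linear
# correlation of a simple classical hidden-variable model.
quantum = -np.cos(theta)
classical = -1.0 + 2.0 * theta / np.pi

# They agree exactly at theta = 0, pi/2 and pi, and differ most where
# sin(theta) = 2/pi (about 0.69 and 2.45 radians), by roughly 0.21.
gap = np.abs(quantum - classical)
print(gap.max())
```

The small maximum gap between the two curves illustrates the "remarkably small difference" between the classical and quantum predictions; it is only at intermediate misalignment angles that the models can be distinguished.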
Instead of the problem of metaphysical loopholes the thing that will allow an alternative explanation of results of the Bell Theorem experiments is finding something important that was neglected in the derivation of the Bell Theorem.
Here is a phenomenon that the Bell Theorem analysis does not take into account. Charged particles moving in parallel are subject to transverse oscillation as each particle is repelled by its nearest neighbor. They oscillate back and forth as the cross section of the beam expands. The angle at which a particle impacts a measuring instrument may be quite different from what it would be for a particle subject to no transverse oscillation.
In a recent conference of quantum physicists a survey was done on which interpretation of quantum theory the physicists favored. Only 42 percent of the respondents chose the Copenhagen Interpretation. This was more than any other interpretation, so the Copenhagen Interpretation is considered the dominant interpretation in physics. Nevertheless the survey indicated that 58 percent of the respondents favored something other than the Copenhagen Interpretation.
HOME PAGE OF Thayer Watkins