# Classical Uncertainty Relationship

TL;DR: Classical Hamiltonian mechanics already contains an uncertainty relationship similar to Heisenberg's uncertainty principle in quantum mechanics.

In previous posts we looked at information entropy (the number of yes/no questions you need to identify an element within a distribution) and at the fact that Hamiltonian dynamics conserves it. Here we will show one interesting consequence: since information entropy cannot change, Hamiltonian mechanics cannot take a distribution and squeeze it arbitrarily. Because we always need the same number of yes/no questions to identify an element, a minimum amount of uncertainty is always associated with each element.

Intuitively, if you have a distribution $\rho(x,p)$ over phase space and you shrink it, the information entropy diminishes: you need less information to identify an element. That type of transformation cannot be achieved through Hamiltonian evolution, since it conserves information entropy. So we ask: given a fixed amount $I_0$ of information entropy, what is the distribution $\rho(x,p)$ that minimizes the spread? We will find that it is a Gaussian, and that in general the product $\sigma_x\sigma_p$ of the standard deviations is related to the information entropy by the following relationship:

\begin{equation}
\sigma_x\sigma_p \geq \frac{e^{I_0 - 1}}{2 \pi}
\label{classicalUncertainty}
\end{equation}
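To get a feel for the bound, here is a minimal numerical sketch (my own illustration, not part of the original derivation). It takes a uniform box distribution in phase space, whose entropy and standard deviations are known in closed form, and checks that its $\sigma_x\sigma_p$ indeed sits above the Gaussian bound $e^{I_0 - 1}/2\pi$ at the same entropy:

```python
import math

# Uniform distribution on a box [0, Lx] x [0, Lp] in phase space.
# Its differential entropy is ln(Lx * Lp), and the standard deviation
# of a uniform distribution of width L is L / sqrt(12).
Lx, Lp = 2.0, 3.0
I0 = math.log(Lx * Lp)
sigma_x = Lx / math.sqrt(12)
sigma_p = Lp / math.sqrt(12)

# Gaussian lower bound at the same entropy.
bound = math.exp(I0 - 1) / (2 * math.pi)
product = sigma_x * sigma_p

print(product, bound, product >= bound)
```

The uniform box gives $\sigma_x\sigma_p = L_x L_p / 12$, while the bound works out to $L_x L_p / (2\pi e) \approx L_x L_p / 17.1$, so the inequality is strict: the box is not the minimum-uncertainty distribution for its entropy.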

Note the similarity with the Heisenberg uncertainty principle of quantum mechanics:

\begin{equation}
\sigma_x\sigma_p \geq \frac{\hbar}{2}
\label{quantumUncertainty}
\end{equation}

Let’s see how it works.

## 1. Minimizing uncertainty

Suppose we have a phase space distribution $\rho(x,p)$. As the state evolves into its final state, the total amount of material remains the same: $\int \rho \, dx \wedge dp = 1$. The entropy will also remain the same: $I(\rho) = - \int \rho \ln \rho \, dx \wedge dp = I_0$. We now ask the following question: once we fix the entropy, what is the distribution that minimizes the product $\sigma_x\sigma_p$ of the standard deviations? That would correspond to the minimum uncertainty for the final distribution given by Hamiltonian evolution on the initial distribution.

We are going to use Lagrange multipliers, a standard technique in the calculus of variations. We write a function $M$ that is the product of the variances $\sigma_x^2 \sigma_p^2 \equiv \int (x-\mu_x)^2 \rho \, dxdp \int (p-\mu_p)^2 \rho \, dxdp$ plus the two constraints (the total amount of material is $1$ and the total entropy is $I_0$), each multiplied by a constant. We then take the variation and set it to $0$.
\begin{align*}
M = &\sigma_x^2 \sigma_p^2 + \lambda_1\left(\int \rho \, dxdp - 1\right) + \lambda_2\left(- \int \rho \ln \rho \, dxdp - I_0\right)\\
\delta M = & \delta \sigma_x^2 \, \sigma_p^2 + \sigma_x^2 \, \delta \sigma_p^2 + \lambda_1 \, \delta \int \rho \, dxdp - \lambda_2 \, \delta \int \rho \ln \rho \, dxdp \\
= &\int (x-\mu_x)^2 \delta \rho \, dxdp \, \sigma_p^2 + \sigma_x^2 \int (p-\mu_p)^2 \delta \rho \, dxdp + \\ &\lambda_1 \int \delta\rho \, dxdp - \lambda_2 \int (\delta \rho \ln \rho + \rho \, \delta \ln \rho) \, dxdp \\
= &\int [(x-\mu_x)^2 \sigma_p^2 + \sigma_x^2 (p-\mu_p)^2 + \lambda_1 - \lambda_2 \ln \rho - \lambda_2 ] \, \delta \rho \, dxdp = 0 \\
\lambda_2 \ln \rho = &\lambda_1 - \lambda_2 + (x-\mu_x)^2 \sigma_p^2 + \sigma_x^2 (p-\mu_p)^2 \\
\rho = &e^{\frac{\lambda_1 - \lambda_2}{\lambda_2}} e^{\frac{(x-\mu_x)^2 \sigma_p^2}{\lambda_2}} e^{\frac{\sigma_x^2 (p-\mu_p)^2}{\lambda_2}}
\end{align*}
Now that we have found the form of the function, we solve for the multipliers: that is, we find $\lambda_1$ and $\lambda_2$ such that the distribution is normalized and the information entropy is $I_0$ (note that $\lambda_2$ must be negative for the distribution to be normalizable). We have:
\begin{align*}
\rho = &\frac{1}{ 2 \pi \sigma_x \sigma_p} e^{-\frac{(x-\mu_x)^2}{2\sigma_x^2}} e^{-\frac{(p-\mu_p)^2}{2\sigma_p^2}} \\
I_0 = &\ln (2\pi\sigma_x\sigma_p) + 1
\end{align*}
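The entropy formula $I_0 = \ln(2\pi\sigma_x\sigma_p) + 1$ can be checked numerically. The sketch below (an illustration of mine, with arbitrarily chosen $\sigma_x$, $\sigma_p$) builds the product-Gaussian $\rho$ on a grid and integrates $-\rho \ln \rho$ directly:

```python
import numpy as np

sigma_x, sigma_p = 0.7, 1.3
mu_x, mu_p = 0.0, 0.0

# Grid wide enough (8 sigma) that the Gaussian tails are negligible.
x = np.linspace(mu_x - 8 * sigma_x, mu_x + 8 * sigma_x, 801)
p = np.linspace(mu_p - 8 * sigma_p, mu_p + 8 * sigma_p, 801)
X, P = np.meshgrid(x, p)

# Product of two independent Gaussians, as found by the variation.
rho = (1 / (2 * np.pi * sigma_x * sigma_p)
       * np.exp(-(X - mu_x) ** 2 / (2 * sigma_x ** 2))
       * np.exp(-(P - mu_p) ** 2 / (2 * sigma_p ** 2)))

# Riemann-sum approximation of I = -int rho ln rho dx dp.
dxdp = (x[1] - x[0]) * (p[1] - p[0])
I_numeric = -np.sum(rho * np.log(rho)) * dxdp
I_formula = np.log(2 * np.pi * sigma_x * sigma_p) + 1
print(I_numeric, I_formula)
```

The two values agree to well within the discretization error of the grid.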
We find that the distribution that minimizes the spread is the product of two independent Gaussians. Recall that in quantum mechanics, the Gaussian wave packet is the state that minimizes uncertainty: this is the classical analogue.

As the distribution evolves, the entropy is conserved, and therefore the product $\sigma_x\sigma_p$ can never be less than that of the Gaussian distribution with the same entropy. We have:
\begin{align*}
\sigma_x\sigma_p \geq \frac{e^{I_0 - 1}}{2 \pi}
\end{align*}
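We can watch the bound at work under a concrete Hamiltonian flow. The sketch below (my own example, using free-particle evolution $x \to x + pt$, which is area-preserving and therefore entropy-conserving) starts from a minimum-uncertainty Gaussian, so the bound is saturated at $t=0$; the shear then spreads the positions while leaving the momenta untouched, and the product $\sigma_x\sigma_p$ grows, staying above the bound:

```python
import math

sigma_x0, sigma_p0 = 1.0, 0.5

# Entropy of the initial Gaussian and the corresponding lower bound,
# which for a Gaussian equals sigma_x0 * sigma_p0 exactly.
I0 = math.log(2 * math.pi * sigma_x0 * sigma_p0) + 1
bound = math.exp(I0 - 1) / (2 * math.pi)

# Under the free-particle flow x -> x + p t (p unchanged), independence of
# x and p gives sigma_x(t)^2 = sigma_x0^2 + t^2 sigma_p0^2.
for t in [0.0, 1.0, 2.0]:
    sigma_x_t = math.sqrt(sigma_x0 ** 2 + t ** 2 * sigma_p0 ** 2)
    product = sigma_x_t * sigma_p0
    print(t, round(product, 4), product >= bound)
```

At $t=0$ the product equals the bound exactly; for $t>0$ it only increases, as the conserved entropy demands.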
This is strikingly similar to the Heisenberg uncertainty principle, except that the bound depends on the initial entropy $I_0$ rather than on a universal constant.

## 2. Conclusion

We have found that classical Hamiltonian mechanics also satisfies an uncertainty relationship. Is this a coincidence? Or is there something more?

This is my take: the difference between the classical and quantum cases is that the uncertainty is set by the initial conditions in the former, while it is universal in the latter. Now, suppose that each element of the classical distribution were subjected to a universal but totally chaotic force. Then we would not be able to prepare initial conditions with an entropy lower than the one that chaotic force allows. For the same reason, we would also be unable to track the position and momentum of each element of the distribution with infinite precision, given that it keeps jiggling under the chaotic force.

If the entropy associated with the initial distribution is large compared to the entropy associated with the chaotic force, then we can disregard the latter and classical mechanics holds. If not, the classical description starts to break down.

In this picture, classical and quantum mechanics are not so different after all: they both describe the evolution of distributions that conserve information entropy.