Sometimes something seems mysterious and astounding; then you learn how it works, and it’s really disappointing. For me, magic tricks have often been that way. This past year I learned how machine learning works, and that was interesting, if somewhat unsatisfying, but fine. Quantum mechanics had been on my list for years, and when I finally decided to learn more about it, I found myself wondering why people describe it in such odd ways when it needn’t be so, to the point that, for the first time I can remember, learning about science made me angry.

# My (Dis)Qualifications

First, while I’ve been working in applied computer science since 1995, I’m not a QM scientist. My scientific reading ranges far and wide, so while I’m far from the polymaths of old, I guess I’m polymath-ish1. That said, it has happened a few times over the years that I held a non-orthodox opinion on some scientific conundrum or other and was later proved right. Most recently, the post Evidence that the key assumption made in discovery of dark energy is in error agreed with something I’d thought for some time; but I’m a nobody in the field, so who’d listen to me? I wouldn’t have. So am I right about what I’ve written below? I give it maybe 1%, but it all fits and makes sense with what I know, so at worst I’ll benefit from Cunningham’s law. At best, I’ll be known as the quantum-mechanical Einstein; the former is way more likely. It’s possible that everything I’ve written about here is already known and I’m late to the bandwagon, but in any case it’s been an interesting learning experience so far.

Also, I assume you’re familiar with the EPR paradox and other QM staples; if you’re not, I don’t know how much this will help you understand it.

Lastly, any misrepresentation of opinions, interpretations, or facts is accidental. This isn’t my field. I’m decent at math, but math errors are definitely a possibility.

# The Problem

I’ve always heard quantum mechanics (QM) described in what’s called the orthodox or Copenhagen interpretation. It always seemed weird and mysterious: terms like superposition, non-locality, and spooky action at a distance. A friend lent me the book Quantum Reality, which I read. As I took in the eight different interpretations of what reality could be, I was scratching my head, thinking: why make this all so complicated? Observer-created realities, measuring devices as special things, many worlds, particles taking all possible paths, multiverses, and so on. Pilot wave theory seemed so much easier on the mind than the others, and it seemed to explain everything.

Why was I befuddled by all of this? Well, you look at pilot wave theory (PWT), and it doesn’t require anything *that* weird (many worlds and so on). It obeys one of my personal laws (maybe someone else has already coined a term for it): “When you see unexpectedly complex behavior, assume more than one thing.” That is, in PWT you separately have a wave and a particle, rather than a single probability cloud with complex behavior. The only fault PWT had, according to the book, was that for non-locality it needed a super-luminal wave. Now, is PWT the way things actually are? I’ll give it a very qualified maybe, but it definitely hurts the brain less and has fewer special pleadings, and definitely no *weird* ones.

Given eight alternatives that are mathematically equivalent, my logic was that Occam’s razor *directly* applies. It applies because all other things *are* equal, mathematically speaking, and therefore experimentally speaking. PWT fits best as the simplest solution.

> Occam’s razor says that when presented with competing hypotheses that make the same predictions, one should select the solution with the fewest assumptions. (Wikipedia, 29 January 2020)

With PWT, “superposition” is just a particle in a state you don’t know. That is, particles have a definite state even when not observed. Schrödinger’s cat **is** either definitely dead or definitely alive in the box, even before you open it.
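To make the “superposition is just ignorance” reading concrete, here’s a toy sketch. It’s purely illustrative and entirely my own framing (the function names are made up), not anything from QM proper:

```python
import random

def seal_box(rng):
    # In the neorealist reading, the outcome is fixed inside the box
    # before anyone looks; "superposition" only labels our ignorance of it.
    return "alive" if rng.random() < 0.5 else "dead"

def open_box(state):
    # Observation reveals the pre-existing state; it does not create it.
    return state

rng = random.Random(7)
state = seal_box(rng)
# Before opening, all we can honestly say is 50/50.
print("after opening:", open_box(state))
# Looking again cannot change the answer.
assert open_box(state) == open_box(state)
```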

But what about the super-luminal wave? That’s definitely not an awesome thing. That said, super-luminal is really just another way to spell non-locality: if you truly have non-locality, you have *something* super-luminal going on, or time travel. So non-locality and super-luminal effects are still an issue. The good news is, there does appear to be something we can do about it.

# The End Of Non-locality and Bell’s Inequality

## The EPR Experiment

What I believe to be the origin of the argument for non-locality is what’s described as the EPR paradox. It involves quantum entanglement of two particles. One formulation is that you have something that fires two entangled photons, one red, one green, in opposite directions with an unknown polarization. Each photon then hits a polarizing crystal with two detectors behind it. If the crystals are oriented the same way (up, down, left, right, 22.5 degrees, etc.), the photons will choose the same path as each other 100% of the time: either both toward one detector, or both toward the other.

Under the Copenhagen interpretation, this is interpreted as measuring one particle in effect altering the other to agree, as neither particle has a definite polarization/spin/etc. until the wave function collapses at the moment of measurement. So how does one particle know how to collapse its wave function to match the other? This is the spooky action at a distance, or non-locality.

The other argument for non-locality comes when the polarizing crystals are offset from each other. The thought was that the probability of the photons choosing the same path should fall off linearly (the reddish-purple line) with the difference in angle (ϴ). It turns out it’s not linear, but rather cosine-squared of theta (the darker purple line), as below.

N.B. the Y axis is the probability that they choose the same path; the X axis is the orientation difference in radians.

As I understand it, this graph differs from the predictions of local hidden-variable theories, and is generally used as proof of non-locality.

This is where I said: “stop the bus.”

# My Objections

I’m a big fan of epistemology, which is basically: how do you know what you know? When arguing things, I like to start there, because if you find an epistemological error, the rest of the argument comes down. When doing incident investigations, I’ll often put statements akin to “what do we think we know, and why do we believe it?” in the working document, because during the firefight, much is unclear. So, starting with that, I had a series of questions after reading the thought experiment:

- If I fired single photons of known polarization through a polarizing crystal at varying angles, would I get the same curve?
- Is polarization a single variable, or is there a “hidden variable” to it, with a simple model that would explain the curve?
- Would the shape of the waveform matter? Maybe it’s not sinusoidal, as I’d naively assumed.
- Why should this be expected to be linear (in the y=mx+b sense) in the first place? Maybe it’s because I had recently been dealing with highly non-linear functions in machine learning.
- Would wavelength matter? As stated where I read it, the two photons were different colors: red and green.
- Do we actually understand how polarization works in the first place? Not how it behaves, which we seem to know quite a lot about, but what’s actually happening?

Spoiler: I don’t answer all of these.

The simplest question was the single-photon case: if you fire single photons of known polarization through a polarizing crystal at various values of ϴ, do you get a linear extinction curve, cos ϴ, cos²ϴ, or something else? It turns out that you get… drumroll… cos²ϴ. It’s a known phenomenon. It even has a name: Malus’s law. Well, that kinda blows most of the non-locality out of the water all on its own, if you can get the same behavior with a single particle, doesn’t it?
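Malus’s law, read as a per-photon probability (the standard quantum treatment of it), is easy to check with a quick Monte Carlo. This is just my own sketch; the function names are mine:

```python
import math
import random

def passes_polarizer(delta, rng):
    # Malus's law read per photon: a photon whose polarization is `delta`
    # radians off the polarizer axis passes with probability cos^2(delta).
    return rng.random() < math.cos(delta) ** 2

def transmission_fraction(delta, trials=100_000, seed=0):
    # Fire `trials` identical photons and count how many get through.
    rng = random.Random(seed)
    return sum(passes_polarizer(delta, rng) for _ in range(trials)) / trials

for deg in (0, 30, 45, 60, 90):
    delta = math.radians(deg)
    print(f"{deg:2d} deg: simulated {transmission_fraction(delta):.3f}"
          f" vs cos^2 = {math.cos(delta) ** 2:.3f}")
```

At 0° everything passes, at 90° essentially nothing does, and 45° sits at one half, tracking cos²ϴ throughout.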

Let’s assume for the moment it doesn’t. The Copenhagen response might be that the particle is obviously entangled with itself, no? So it’s communicating with itself, either forward or backward in time, such that the wave function collapses just the right way?

If we can find a single-particle, nothing-special interpretation such that two entangled particles could reasonably produce cos²ϴ as the extinction curve without requiring non-locality, we should be in good shape. It might also clarify what quantum entanglement actually means in more concrete terms, or give some insight into aspects of polarization.

# Why cos²ϴ?

I might have had reason to suspect cos ϴ as the curve, but cos²ϴ does seem a bit unexpected. If you graph cos²ϴ as a distance from the origin in the unit circle across values of ϴ, you get this:

That didn’t seem to bring me much in the way of enlightenment.

Now I started thinking: if this has anything to do with particle spin (some EPR experiments do use spin, and it behaves the same way), it may be about the orientation of the spin axis. If it’s indeed the spin axis, then there are two variables: the inclination ϴ, already mentioned, and also the azimuth φ. While the view as the particle comes at the detector can be thought of as 2D, the actual axis involves a third dimension. So what happens if we plot cos²ϴ on the unit sphere, but as the 2D distance from the origin, that is, as you look at the sphere from one of the axes? What do you get?

## Deriving the coordinate space

Modelling the coordinate space like so:

So working from that, given φ = 0, it’s evident that:

x = cos ϴ, y = sin ϴ, z = 0

For φ = 90 and ϴ = 30, it’s also evident that:

x = 0, y = 1/2, z = sqrt(3)/2

or

x = 0, y = sin ϴ, z = cos ϴ

And lastly, for φ = 30 and ϴ = 0:

x = sqrt(3)/2, y = 0, z = 1/2

or

x = cos ϴ cos φ, y = sin ϴ = 0, z = cos ϴ sin φ

Therefore, for general ϴ and φ:

x = cos ϴ cos φ

y = sin ϴ

z = cos ϴ sin φ
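As a sanity check on the derivation (my own script, not from the original; `axis_point` is a name I made up), the parametrization reproduces the three worked cases and always lands on the unit sphere:

```python
import math

def axis_point(theta, phi):
    # Spin-axis direction for inclination theta and azimuth phi, using the
    # parametrization derived above (note y plays the role z usually does).
    return (math.cos(theta) * math.cos(phi),
            math.sin(theta),
            math.cos(theta) * math.sin(phi))

def close(a, b):
    return all(math.isclose(p, q, abs_tol=1e-9) for p, q in zip(a, b))

# The three worked cases from the text:
assert close(axis_point(math.radians(30), 0.0),
             (math.sqrt(3) / 2, 0.5, 0.0))
assert close(axis_point(math.radians(30), math.radians(90)),
             (0.0, 0.5, math.sqrt(3) / 2))
assert close(axis_point(0.0, math.radians(30)),
             (math.sqrt(3) / 2, 0.0, 0.5))

# And every point lands on the unit sphere:
for t in range(0, 360, 15):
    for p in range(0, 360, 15):
        x, y, z = axis_point(math.radians(t), math.radians(p))
        assert math.isclose(x * x + y * y + z * z, 1.0, abs_tol=1e-9)
print("parametrization checks out")
```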

From there, my thought was to plot points such that, for a given ϴ across values of φ, the ratio should be cos²ϴ, and see if a picture emerged that made sense. At first I thought to pick random values of x, y, and z and go that way, but in the polarization case we fix ϴ, leaving φ free to be anywhere in range; picking random x, y, z on the sphere would give a different distribution of ϴ and φ and skew the ratios.

To clarify: particle motion is along the z axis, and theta is the angle of difference between the two polarizers.

So I ran some Monte Carlo estimations with fixed theta and random phi.
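The post doesn’t include the script for this step, so the following is my reconstruction of the idea: fix ϴ, draw φ uniformly, and test the projected 2D distance against cos²ϴ, the same comparison the plotting code in the code section at the end uses to color the sphere. `red_fraction` is my own name for it:

```python
import math
import random

def red_fraction(theta, samples=100_000, seed=1):
    # Fix the inclination theta, draw the azimuth phi uniformly, and count
    # how often the projected 2d distance-squared (x^2 + y^2) reaches
    # cos^2(theta) -- the same test the plotting code uses for its red color.
    rng = random.Random(seed)
    c2 = math.cos(theta) ** 2
    hits = 0
    for _ in range(samples):
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x = math.cos(theta) * math.cos(phi)
        y = math.sin(theta)
        if x * x + y * y >= c2:
            hits += 1
    return hits / samples

for deg in (15, 30, 45, 60):
    print(f"theta = {deg} deg: red fraction {red_fraction(math.radians(deg)):.3f}")
```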

Looking from azimuth φ = 0:

Well, well, well!

Looking from φ = 30 degrees:

So it would seem that *here’s* the hidden variable. This would explain why two EPR particles behave as they do with respect to the angle: it’s not just the inclination, but the azimuth too. The two lines cross at exactly 90 degrees, and the angles vary from straight up/down by exactly 45 degrees. So at the same inclination but different azimuths (or vice versa), the particles might take different paths in the polarizing crystal.

This picture very much fits the intuitive shape of what polarization would look like, right? This explains why the cos²ϴ is in there. As a side note, while looking up how to plot a sphere, I saw this2:

```
x = cosϴ cosφ
y = sinϴ
z = cosϴ sinφ
```

I nearly jumped out of my seat (the cos cos for x): while I wasn’t sure I was going to get what I got, it did scream that the approach would likely bear fruit.

There’s math here I’d still like to do, but I’m not familiar enough with the tooling (Mathematica, Octave, Wolfram Alpha, Maxima, etc.) to make headway. What I’d expect is that if you take the sphere above and compute the intersection of its red area with another sphere’s red area, at a known offset in inclination or azimuth, the surface area of the intersection would yield the observed cos²ϴ relation. It seems obvious that if you turn the ball 45 degrees clockwise (taking the 0-azimuth view), the intersection is half, which is what is observed. But for the others, as I said, my math chops aren’t that good.
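The intersection areas can at least be estimated numerically without any symbolic tooling. Below is a sketch under two assumptions of mine: “red” is taken from the plotting code’s coloring rule (which, on the unit sphere, reduces to y² ≥ z²), and “turning the ball clockwise” is modeled as a rotation about the line of sight (the x axis in the φ = 0 view). Under those assumptions, no turn gives full overlap and a 45-degree turn gives half; whether the cos²ϴ relation actually emerges for the other offsets is exactly what an estimate like this could settle.

```python
import math
import random

def in_red(x, y, z):
    # The plot's coloring rule is (x^2 + y^2) >= cos^2(inclination) with
    # y = sin(inclination).  On the unit sphere cos^2 = 1 - y^2 and
    # x^2 = 1 - y^2 - z^2, so the rule reduces to y*y >= z*z.
    return y * y >= z * z

def turn(p, alpha):
    # Rotate about the x axis: "turning the ball" seen from the phi = 0 view.
    x, y, z = p
    return (x,
            y * math.cos(alpha) - z * math.sin(alpha),
            y * math.sin(alpha) + z * math.cos(alpha))

def overlap_fraction(alpha, samples=100_000, seed=2):
    # Of the red area, estimate the fraction still red after turning by alpha,
    # sampling uniform points on the sphere via normalized Gaussians.
    rng = random.Random(seed)
    red = both = 0
    for _ in range(samples):
        g = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in g))
        p = tuple(c / n for c in g)
        if in_red(*p):
            red += 1
            if in_red(*turn(p, alpha)):
                both += 1
    return both / red

for deg in (0, 22.5, 45, 67.5, 90):
    print(f"turn {deg:5.1f} deg: overlap {overlap_fraction(math.radians(deg)):.3f}")
```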

# Other Questions

Having seen that, I now wonder:

- Is polarization really about the *wave* orientation, or have we always thought of it that way because we had seen physical waves that behave that way?
- Do waves have a 3rd dimension in their vibration?
- How would one model circular (chiral) polarization in a model such as this?

# Summary

So if we go with a neorealist interpretation, particles **do** have a definite position, polarization, and momentum when not observed. Quantum superposition is just saying “we don’t know”, and we are still limited by Heisenberg’s uncertainty principle. The spooky action at a distance is just that the polarization of two entangled particles is the same in both inclination **and** azimuth (or exactly opposite, depending on the precise implementation), which is why they take the same paths through the polarizing crystals. Malus’s law conveniently explains Bell’s inequality, or if that’s not enough, the graph above explains why Malus’s law is what it is. The consequence of this is a fully local reality (i.e. nothing super-luminal, nor non-local), as the results of the EPR experiment can be fully explained. And the 3D plot intuitively fits what polarization should look like.

I do wonder somewhat, under the big assumption that I’m actually correct, why people went with the more exotic interpretations when simpler ones seem to work. But I wasn’t there, especially in the beginning. And some things are much simpler once someone figures them out; they can be obvious in retrospect.

So am I right? It’d be really cool if I was.

## But I’m probably wrong.

# Code, formulas, etc. so you can check my work

- cos²ϴ vs linear
wolframalpha
`plot cos^2(x), (1-(x/(pi/2))), x=0..pi/2`

- cos²ϴ in a circle
wolframalpha
`parametric plot [{(cos^3 x, cos^2 x sin x)},{(cos x, sin x)}], x=0..2pi`

- cos²ϴ on a sphere

```
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import pyplot as plt
from matplotlib import cm
import numpy as np
import time
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d', xlim=(-1,1), ylim=(-1,1))
# v inclination, u azimuth
u, v = np.mgrid[0:np.pi:100j, 0:2*np.pi:100j]
# make the sphere
x = np.cos(v) * np.cos(u)
y = np.sin(v)
z = np.cos(v) * np.sin(u)
# compute the 2d distance relative to cos² v from azimuth 0, elevation 0
diffs = ((x**2 + y**2) - (np.cos(v)*np.cos(v)))
# @ elev 0, azi 0, x = 1, y = 0, z = 0
# clip values to 0, -1 from 3 decimal places
pos_and_neg = (diffs * 1000).clip(-1, 1)
# map to seismic color map
my_col = cm.seismic( pos_and_neg )
ax.plot_surface(x, y, z,
rstride=1, cstride=1,
facecolors=my_col)
#ax.view_init(elev=0, azim=0)
plt.show()
```

# Footnotes

1 Maybe more Johnny Mathis-ish

2 Actually, I derived it by hand before I found a similar formulation elsewhere online; hence y and z are swapped from what you normally find. It didn’t change the results, so I just left it as it was.