LOG#219. Stranger Planck things.

“(…) Suzie, what is the Planck constant value? (…)”

Nice to see this fundamental quantum constant in a TV series, even if only shallowly. The rest was awful science, and I guess bad advisors (or even no advisors at all) were behind the big scientific failure of the most epic moment of Stranger Things III. I am not counting the song as a failure, since I love The NeverEnding Story theme by Limahl. Anyway, to the subject…

The Planck constant is essential in the new S.I., and it is also a fundamental constant of physics. It is related to the action functional and to the fundamental relationships of angular momentum and energy quantization. Originally, energy quantization via the Planck constant emerged because the classical black body radiation spectrum was divergent (the ultraviolet catastrophe). Planck (1900) introduced the quantization of energy to solve the divergence. Later, Einstein used light quanta to explain the photoelectric effect, Bohr quantized angular momentum to explain atoms, and eventually the quantization of the action was noted to be more fundamental than energy or angular momentum quantization. For instance:

    \[P= \sum_\gamma\exp(iS_\gamma/\hbar)\]

    \[E=\hbar \omega=hf\]

    \[J=n\hbar\]

    \[\lambda=h/p\]

where \hbar=h/2\pi is the reduced Planck constant, i.e., the Planck constant divided by two pi. The measured value of h (or \hbar) has evolved over time. \hbar and h are not formulae: they are constants taking values in a certain system of units. By the way, nothing is said in the episode about the units (I suppose SI, but given that the TV series is American/English-based, I feared seeing a non-SI value too). Anyway, here are the values of the Planck constant through time:

1st. 1969. Here, h=6.626186(57)\cdot 10^{-34}J\cdot s.

2nd. 1973. h=6.626176(36)\cdot 10^{-34}J\cdot s. THIS is the value that should have been seen on the screen in Stranger Things III, given its time frame!

3rd. 1986. This is already the future with respect to the time frame of the Stranger Things III season, as are all the remaining values… Here,

    \[h=6.6260755(40)\cdot 10^{-34}J\cdot s\]

4th. 1998. Here, you see

    \[h=6.62606876(52)\cdot 10^{-34}J\cdot s\]

5th. 2002. I wonder why CODATA and NIST reduced the precision at this time, but it seems it happened:

    \[h=6.6260693(11)\cdot 10^{-34}J\cdot s\]

6th. 2006. Here, you find out

    \[h=6.62606896(33)\cdot 10^{-34}J\cdot s\]

7th. 2010. There, you get

    \[h=6.62606957(29)\cdot 10^{-34}J\cdot s\]

8th. 2014. This IS the value of the Planck constant Suzie recalls in the episode (except that she forgot the power of ten and the units!):

    \[h=6.62607004(81)\cdot 10^{-34}J\cdot s\]

Question: did Suzie live in the future or did she travel to the future to get the 2014 h value?

9th. 2018. The new S.I. exact adjustment (likely to be revised in the near future):

    \[h=6.62607015\cdot 10^{-34}J\cdot s\]

Remark: as it is an EXACT definition, no uncertainty is written between brackets!

A summary table, including even earlier values of the Planck constant than those listed above:

Theories?

 

-Suzie is from the future (like Frequency, the movie!).

-Suzie is a time traveler (like Back to the Future).

-Stranger Things writers have bad (or null) scientific advisors.

The most probable explanation, for me, is the last one. Any other?

Comment: all of this can easily and quickly be checked on the internet. Go to the CODATA or NIST websites and check the values and the units, but please recall the system of units we scientists use, and do not forget the power of ten if there is a scale factor at the top of the data table!
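If you want to play with the numbers, here is a minimal Python sketch (the values are typed in from the list above, not fetched from the NIST site) printing every CODATA value of h with its relative uncertainty:

    # Historical CODATA values of h, in units of 1e-34 J·s,
    # as (year, value, absolute uncertainty).
    codata_h = [
        (1969, 6.626186, 0.000057),
        (1973, 6.626176, 0.000036),
        (1986, 6.6260755, 0.0000040),
        (1998, 6.62606876, 0.00000052),
        (2002, 6.6260693, 0.0000011),
        (2006, 6.62606896, 0.00000033),
        (2010, 6.62606957, 0.00000029),
        (2014, 6.62607004, 0.00000081),
        (2018, 6.62607015, 0.0),  # exact by definition in the new SI
    ]

    for year, value, unc in codata_h:
        rel = unc / value  # relative standard uncertainty
        print(f"{year}: h = {value}e-34 J·s (relative uncertainty {rel:.1e})")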

LOG#218. Atomic elephants.

Have you ever asked yourself why there are no elephants or humans of atomic size? Have you ever asked why the Earth is so big in comparison with atoms? Why are nuclei roughly 10^{5} times smaller than atoms? I will be discussing these questions in this short blog post…

1st. Why R_{Earth}>>R_{atom}?

Answer: it is related to the relative strength of gravity versus electromagnetism, via the Gauss law! In 3d space (4d spacetime) both the Coulomb and the Newton laws are inversely proportional to the square of the distance (spherical shells!). Then:

    \[\dfrac{F_E}{F_{a}}=\dfrac{R_E^2}{R_a^2}\]

and from this you get

    \[\dfrac{R_E}{R_a}=\sqrt{\dfrac{Ke^2}{GM_p^2}}=\left(\dfrac{e}{M_P}\right)\left(\sqrt{\dfrac{K_C}{G_N} }\right)=10^{18}\]

The reason is that the ratio of the electron charge to the proton mass, times the square root of the ratio of the Coulomb constant to the Newton constant, is big. The Earth is big because of the ratio of the electron charge to the proton mass, and the ratio of the Coulomb constant to the Newton constant. Thus, you can not find atomic-sized planets or Earth-sized atoms… And do not go crazy asking to change the values of those constants…
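A minimal numerical check of this estimate, with constants hard-coded (the order of magnitude is what matters here):

    import math

    K = 8.988e9      # Coulomb constant, N·m²/C²
    G = 6.674e-11    # Newton constant, N·m²/kg²
    e = 1.602e-19    # elementary charge, C
    m_p = 1.673e-27  # proton mass, kg

    # Square root of the electric-to-gravitational force ratio for two protons
    ratio = math.sqrt(K * e**2 / (G * m_p**2))
    print(f"R_E/R_a ~ {ratio:.2e}")  # about 1.1e18, as claimed above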

2nd. Why R_{Elephant}>>R_{atom}?

Answer: similarly, you can compute the ratio between the gravitational energy of an elephant and the electric energy of any atom:

    \[\dfrac{E(g)}{V_{El}}=\dfrac{E(atom)}{V_{atom}}\]

Then, you get

    \[\dfrac{R_{El}}{R_{atom}}=\left(\dfrac{K_Ce^2}{GM_p^2}\right)^{1/3}\sim 10^{12}\]

Therefore, elephants or humans are constrained to a few meters…Great!

3rd. Why R_{atom}=10^{5}R_{nucleus}?

Answer: this turns out to be the hardest to understand. The secret behind the answer lies in the Yukawa force and the exponential screening (short-range behaviour) of nuclear forces, which confines them to a few proton radii (and, of course, in the coupling constants). Take for instance the strong force case; then

    \[g_s^2\dfrac{\exp(-r/r_0)}{r^2}=\dfrac{K_Ce^2}{r^2}\]

then

    \[\dfrac{r_{nucleus}}{r_{atom}}=\sqrt{\dfrac{\alpha_s}{\alpha}}\exp(-r/2r_0)\]

Plugging \alpha_s\sim 1, \alpha=1/137, r/2r_0\sim 5, you get that the above ratio is of order

    \[\dfrac{r_{nucleus}}{r_{atom}}=10^{-5}\]

Fantastic! Allons-y!

Proton decay is expected naturally at some point between 10^{45} yrs and 10^{120} yrs from “standard assumptions” about virtual black holes or space-time foam. It is inevitable to get some dimension 5 operator for neutrino masses, in the symbolic form (LH)(LH)/M at about 10^{10} or 10^{14} GeV, and leptoquarks triggering proton decay at about 10^{16} GeV, even without the above general quantum gravity prediction. There are also interesting dimension 6 electric dipole operators for neutrons and other neutral particles at scales of about 30-100 TeV! The LHC is hardly sensitive to these couplings, beyond indirect measurements, but future colliders at 100 or 1000 TeV (the Pevatron!) could test the flavor changing processes due to kaon-antikaon systems. Much more subtle are the issues of the Higgs mass, the baryon and lepton number symmetries, and the sources of CPT violation we have already investigated in the past and current century. It is a mess nobody fully understands yet.

We have understood the kinematics and dynamics of spin 1 and 1/2 particles, and even shallowly explored the Higgs (spin zero) discovered in 2012! Higher spin particles? Spin two is gravity, and it surely exists, after the GW observations by LIGO and the observations to come. There are also spin 3/2 particles, expected from general supersymmetry arguments. Spin 5/2 or spin 3 and higher interactions are much more complicated. Excepting some theories of gravity such as hypergravity, Vasiliev higher spin theory, and the higher spin theories coming from string/superstring theory, they are really hard to analyze. In fact, string theory can be seen as an effective theory from higher spin theory, containing gravity! But the efforts towards a full understanding of higher spin theories are yet to come. We are far from a complete view of higher spin theories.

What about the LHC expectations? Colliders are machines whose performance is quantified by a quantity called luminosity. Luminosity is related to elementary cross sections from particle physics, and the number of events per unit time is essentially luminosity times cross section…

    \[\dfrac{N_E}{Time}=\mathcal{L}\cdot \sigma\]

For the LHC, \mathcal{L}\sim 10^{34}cm^{-2}s^{-1}, and SM cross sections are measured in barns, 1barn=10^{-24}cm^2=10^{-28}m^2. The LHC works at 14 TeV, with 1TeV^{-1}\sim 10^{-19}m. Typical electromagnetic (strong) interactions scale like \sigma_{em}=\alpha/(1TeV)^2, and thus the cross section is about \sigma\sim 10^{-36}cm^2=1pb, where pb denotes picobarn. Accumulated data are measured in terms of inverse barns (the integrated luminosity; the number of events is the integrated luminosity times the cross section). Independently of whether you calculate amplitudes with Feynman graphs, the amplituhedron or any other gadget, these things are surely universal. Neutrino interactions are related to the Fermi constant or the M_W mass, and if you go searching for some universal principle for scattering amplitudes, you will find out that, on general grounds, consistent spin one theories provide you polarization sums of type \sum g_j=0, and spin two ones \sum g_jV^\mu_j=0. There are likely issues at loop corrections, but surely the universal laws of physics should conspire to be coherent in the end, even with loop corrections. How? That is a puzzle.
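A minimal sketch of this event-rate arithmetic, using the rough figures quoted above:

    # Number of events per unit time = luminosity × cross section
    L = 1e34       # LHC instantaneous luminosity, cm^-2 s^-1
    sigma = 1e-36  # a typical 1 pb cross section, in cm^2

    rate = L * sigma  # events per second
    print(f"{rate:.0e} events per second")  # 1e-02: one event every ~100 s
    print(f"about {rate * 3.15e7:.0f} events per year of running")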

Finally, let me show you how to calculate the time a photon needs to escape from the Sun. The Sun is a dense medium, so the photon interacts with atoms and dense matter in the core before popping out from the exterior shells and travelling to the Earth.

Data: solar density at the core \rho_c=150\times 10^3kg/m^3. Opacity is \kappa\approx 0.2m^2/kg.

The mean free path is L=1/\kappa\rho_c=3\times 10^{-5}m. The solar radius is about R_\odot=7\times 10^{8}m. A random walk of the photon in N steps yields d=L\sqrt{N}, so if the photon survives, reaching d=R_\odot takes N=(R_\odot/L)^2 steps. Finally, the total distance traveled by the photon is D=NL, so D=R_\odot^2/L. The time it takes a photon to travel D meters is obtained dividing by the speed of light, so

    \[t_\gamma=\dfrac{R_\odot^2}{Lc}\]

Plugging numbers

    \[t_\gamma=\dfrac{49\cdot 10^{16}}{3\cdot 10^{-5}\cdot 3\cdot 10^{8}}\sim 5\cdot 10^{13}s \sim 1\cdot 10^{6}yrs\]

So, a photon escapes from the Sun's interior to its surface in about 1 Myr. If you enlarge the opacity you get a higher number, and if you let the mean free path grow, decreasing the opacity (matter transparency!) due to density effects, you could get escape times of about 10^3-10^5yrs. Note that if the photon did not interact, it would escape from the Sun in a couple of seconds, like a neutrino!
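A minimal Python sketch of the random-walk estimate above:

    # Random-walk escape time of a photon from the solar core
    kappa = 0.2    # opacity, m^2/kg
    rho_c = 150e3  # core density, kg/m^3
    R_sun = 7e8    # solar radius, m
    c = 3e8        # speed of light, m/s

    L = 1 / (kappa * rho_c)  # mean free path, ~3e-5 m
    t = R_sun**2 / (L * c)   # escape time, s
    print(f"mean free path: {L:.1e} m")
    print(f"escape time: {t:.1e} s = {t / 3.15e7:.1e} yr")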

May the atoms be with you!

LOG#217. The 2 pillars.

Time of Cosmic Voyages https://www.youtube.com/watch?v=xEdpSgz8KU4

This is a long post, despite not being a special post (remember I make one of those every 50 posts). What are we going to see here? A five-part post!

Part(I). A descriptive prelude.

  • Limitations of our electromagnetic observations? Reductionist vs. holistic visions? We were long limited to electromagnetic, visible-light observations. No more today. Reductionism through atomic or quantum physics seems to have reached a limit. Quantum entanglement and contextuality change the global view. Complementarity holds.
  • Artifacts. Particle physics uses wonderful detectors and colliders: cyclotrons, particle accelerators, synchrotrons and others. We will see why this whole stuff matters for your daily life and health.
  • Game rules. Relativity plus quantum mechanics. The two pillars of physics (central topic of this post!). Matter and energy interplay via E=mc^2, wavelength versus momentum interplay via the duality \lambda=h/p. The precision and accuracy of these two big pillars make a further unification difficult.
  • Uncertainty principle. Concentrated energy as a microscope up to scales \Delta X\sim \hbar c/\Delta E, L\sim 1/p. There are some generalized uncertainty principles out there.
  • From the 150-year-old Periodic Table to the cosmic roulette of particles. Is a 1/7 reduction of the number of elements good enough to be kept forever and ever?
  • Action is quantized! That is the hidden mantra of Quantum Physics. Energy quantization and momentum quantization are secondary. The key magnitude is the action (actergy!). Forget what you think about quantization for a few seconds. The main object of quantization is the action (actergy). Other quantizations are truly derived from the action.
  • What is the relative strength of the 4 fundamental forces? Compare them at different distance scales. See it below these lines.
  • QCD masses versus Higgs-derived masses. You will learn (or remember, if you already knew it) that the mass of your body is essentially a QCD effect. Spoiling the Higgs would not make you massless. Higgs particles, via the Higgs field and spontaneous symmetry breaking, only give masses to the elementary known particles. Of course, the Higgs field is important since it also allows atoms to become stable and bound… Otherwise electrons would be massless. BUT the proton mass is, as many other hadron masses, a QCD effect. Why did Nature choose to keep this accident? Well, it is fortunate for us, since atoms and complex objects depend on proton stability (or long-lifetime metastability), but I am not a big fan of the anthropic principle, or of asking why the laws of physics are so well tuned for life to emerge.
  • Protons are complex objects. Textbooks, circa 2020!, seem to be obsolete like the Terminator! I mean, how many of you keep thinking of protons or neutrons as big solid balls instead of wibbly wobbly timey wimey stuff? Just joking, whovians. I know you know what the time vortex equation truly is. Are elementary particles balls?
  • Fine tuning of parameters, stars and the origin of the elements from the primordial nucleosynthesis at the early Universe.

The SM is formed by the following set of elementary particles (up to more complex counting systems including charges and polarization modes!):

17 particles only! Usual atoms are made only of the first generation, as a good approximation, so you pass from 118 (or more) periodic table elements to 17 particle types! That is a factor 7 reduction! A cosmic wheel way of representing these particles also exists:

Moreover, the Higgs and the top quark have extremely short lifetimes. Using the relationship between energy (mass width) and lifetime, you get that top quarks live about 10^{-25}s, a fraction of a yoctosecond (and the Higgs boson about 10^{-22}s). However, we are far away from Planck time physics (about 10^{-43}s). The above circle looks like a pokeball. Anyway, it is a triumph of reductionism: everything reduced to combinations of those particles.

By the way, the Higgs particle determines at what distances particles interact. Particles that are Higgs-transparent are massless and they act over infinitely large distances. That is the case of electromagnetism and gravity. Subtle point: gluons, despite being massless, are confined into hadrons due to non-abelian features and to confinement. Top quarks interact strongly with the Higgs, indeed very strongly. That is why tops are heavy. Similarly, the electroweak bosons W,Z also interact strongly with the Higgs, but somewhat less than the top, and get masses of about 100GeV. Particles interacting more with the Higgs field and the Higgs particles are thus more massive.

The Universe of particles is being explored right now at the LHC with 14TeV smashes. We see particles everywhere, from subnuclear distances down to scales of about a zeptometer at the LHC. Any particle is tagged with some particular properties called quantum numbers: mass, electric charge, angular momentum and spin, parity, weak charges, hypercharges,… Note that you are made of a big number of particles. Being about 70kg, and supposing protons are what make you massive, you are a composite of about 10^{29} protons.
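A minimal sketch of that lifetime estimate via \tau=\hbar/\Gamma, taking rough decay widths of 1.4 GeV for the top and 2.5 GeV for the Z boson (figures assumed here for illustration):

    # Lifetime from decay width: tau = hbar / Gamma
    hbar = 6.582e-25  # GeV·s
    widths = {"top quark": 1.4, "Z boson": 2.5}  # approximate widths, GeV

    for name, gamma in widths.items():
        tau = hbar / gamma
        print(f"{name}: tau ~ {tau:.1e} s")  # a few 1e-25 s: fractions of a yoctosecond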

Fields in the continuum and particles in the discrete are not contradictory views. Particles are just the excitations of the fields. The usual picture of a continuum versus a discrete view of Nature is turned into a much more complementary, unified view today. Matter-energy and spacetime are made of FIELDS. There are only a few fields in the Universe, maybe manifestations from different mirrors of a single field (this is the unification dream, the final theory treating everything as a single force and field). Kant, Einstein, Faraday, Maxwell, Newton, Leibniz and many others have taught us a lot about these visions.

During the 19th century, a new formulation of classical physics was built: a variational approach, based on lagrangian and hamiltonian dynamics. Systems are encoded into gadgets called lagrangians (or lagrangian densities in field theory). They overcome the limited, and sometimes hard to apply, newtonian dynamics \sum F=Ma. In newtonian dynamics, the problem of understanding the Universe is reduced to understanding what mass is (unanswered in newtonian physics!) and knowing the forces of the Universe. In the rational mechanics of lagrangian and hamiltonian methods, you are reduced to finding out the symmetries of the problem, and to computing the action such that the equations follow from a minimum (or, more generally, a critical point) of the action functional. For first order lagrangians and classical hamiltonian dynamics, the equations read

    \[\dfrac{\partial L}{\partial q}-\dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{q}}=0\]

    \[\dfrac{\partial \mathcal{L}}{\partial \phi}-\partial_\mu\dfrac{\partial \mathcal{L}}{\partial \left(\partial_\mu \phi\right)}=0\]

for lagrangians of particles and fields respectively, and

    \[\dfrac{\partial H}{\partial p}=\dot q\]

    \[\dfrac{\partial H}{\partial q}=-\dot p\]

for hamiltonians of particles or fields. The first set of equations are named the Euler-Lagrange (EL) equations, and the latter are the Hamiltonian Equations (HE). The problem of computing the forces is turned into the problem of finding or guessing the lagrangian or hamiltonian, and the dynamics is reduced to the understanding of the symmetries, via the action principle and the Noether theorem.

What is the problem with all this? The quantum mystery is just a mystery of the vacuum. Vacuum mysteries around you. Aristotle had a notion of purpose in his view of Nature: the act. With Galileo, we learned that there are act-less motions! Galileo refuted the aristotelian view of motion. That was further mathematically developed by Newton, Leibniz and others. What Galileo did experimentally, and mathematically very naively, Newton and later scientists would do precisely. However, it turns out that classical physics, your own perception of the Universe, is biased. Classical physics is only an approximation. That was indeed anticipated by the EL and HE approaches and the action principle, where all the possible configurations are in principle there, but only one (the classical one, for reasons we will see later) seems to be selected. That Nature tests everything is indeed the main argument of quantum physics! The usual non-deterministic view of Nature in quantum physics is surely a bit deeper than the eventual EL or HE approaches to classical mechanics, and it has stunned everyone since then. But it is true, at least within the precision of our current experiments. Nobody knows the future for sure, but quantum mechanics and non-deterministic probabilistic statements are here to stay a long time (if not forever).
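Back to the formalism for a second: as a one-line example of the EL equations at work, take the harmonic oscillator lagrangian L=\frac{1}{2}m\dot{q}^2-\frac{1}{2}kq^2. Then \partial L/\partial q=-kq and \partial L/\partial \dot{q}=m\dot{q}, so the EL equation gives back Newton's law:

    \[m\ddot{q}+kq=0\]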

What is the point with quantum physics? Well, take for instance the neutron decay process

    \[n\rightarrow p+e^-+\overline{\nu}_e\]

It can happen, for a free neutron, in about 15 minutes (888 seconds, more or less). Quantum physics tells you that you CAN NOT predict when the decay is going to happen. You are only allowed to ask for the probability of the neutron decaying in a certain time period. Particle decays are essentially quantum phenomena, and statistically poissonian. You can not predict when something will decay but, knowing some distribution of probabilities, and statistics, you can predict probabilities for events to happen. Thus, quantum physics is just a framework that gives you “probabilities of events”: instead of being told what is going to happen, you can only ask what the probability of something happening is. Of course, the caveat here is that even quantum mechanics has tricks… Under certain circumstances, you can find impossible events or even sure events. The Sun is surely wasting its hydrogen fuel. Solar photons arrive to us thanks to quantum physics (the tunnel effect in fusion, essentially). No matter if you hate the philosophy of quantum mechanics: it allows us to exist and live.
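A minimal sketch of that probabilistic statement, assuming a mean neutron lifetime of 888 s and the exponential decay law:

    import math

    tau = 888.0  # assumed mean neutron lifetime, s

    # Probability that a free neutron has decayed within time t:
    # P(t) = 1 - exp(-t / tau)
    for t in (60, 888, 3600):
        p = 1 - math.exp(-t / tau)
        print(f"P(decay within {t} s) = {p:.3f}")

Quantum mechanics gives you this distribution, and only this: it will never tell you the exact decay second of a given neutron.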

Well, can we compute averages? Yes. QM lets you compute quantum averages of classical and quantum observables. The vacuum polarizes, and you can study particle production processes, even with antiparticles. Accelerators and colliders show that quantum fluctuations are not just bubbles. It is just like a mesh of beams spreading out and interacting with everything in the surroundings. Quantum physics, then, says that every possible trajectory between A and B happens for quantum particles. Between A and B the states are undetermined (unless you measure, of course) and likely entangled. Quantum physics is, in the terms of M. Gell-Mann, the supreme totalitarian theory. Gell-Mann’s totalitarian principle stated originally that everything that is not forbidden is mandatory; today it has been reformulated and upgraded.

Totalitarian QM principle: everything that is not forbidden can happen or will happen.

This principle is essential to have a broader view of what QM is (even if not completely understood by you or by the experts!). Moreover, any fundamental interaction in the SM has, in its simplest terms, a vertex Y structure (neglecting loop corrections).

How could we understand the totalitarian principle and the action formulation of classical AND quantum mechanics better? Let me begin by stating that Feynman graphs, even if complex, are a useful part of modern physics. Quantum fluctuations and fundamental force interactions are usually represented by Feynman graphs. They represent events in spacetime. Just as SR represents relativistic events with light-cones or space-time diagrams, particle physicists use Feynman graphs to model fundamental physics and interactions. Some examples from the SM:

The idea that particles follow A SINGLE history belongs to the past. They follow all the possible histories at once. Thus, Laplace’s dream of a machine predicting the ultimate future of the Universe can not be accomplished totally in a QM world. We can only say what the odds of our future are… In fact, QM has to use approximations and statistics, since mice, men and women, elephants, or big things are composites of many particles. Predictions would become IMPOSSIBLE without a statistical and probabilistic approach. Certainly, it is also possible you know the bayesian approach to probability and science. Well, QM is the ultimate expression of bayesianism in the scientific world. Of course, you can check that some statements are TRUE and FALSE, but only within the precision of your experimental set-up and current theories or hypotheses. In summary, record this strongly: QM gives only probabilities of decays, not when decays will happen…

What about the action principle? The action principle gets enforced in quantum mechanics. In fact, the reason why we “see” the world as classical is twofold: the Planck constant is small, and the classical trajectories, those of minimal action, provide the main weight to the quantum amplitude. The classical action is a magnitude equal to mass times the proper time, or energy times the proper time (with c=1), modulo a minus sign:

    \[\mbox{Action}=-M\tau=-(\mbox{Mass}\cdot\mbox{Propertime})\]

The quantum amplitude is something like the sum

    \[A(i\rightarrow f)=\exp(iS_1/\hbar)+\exp(iS_2/\hbar)+\cdots+\exp(iS_n/\hbar)\]

This sum is a complex number! The big thing is that the non-dominant, non-classical paths “cancel” (or almost), and you are left with the minimal action principle from quantum mechanics. It is not that the other trajectories do not happen; they do happen. They are orthogonal and interfere destructively, while the classical path is reinforced or enhanced by quantum interference! There is a simple gedanken experiment (sometimes even simulated via applets on some websites). Take a light beam projector with low intensity I. Take a device that counts the number of clicks when photons arrive. Of course, this depends on the wavelength \lambda and the frequency f of the light. You can count the clicks as the photons hit a sensor in the screen. Sometimes the light pushes 36 clicks, other times it pushes 16 clicks. Indeed, you can write down a formula giving you a discrete relationship between the number of clicks and the paths the photons followed to the detector, something like this

    \[N=\sum_{paths}\vert\pm 1\vert^2\]

The indeterminism is just a result of the interference of the paths! But in this simple example, every trajectory has the same absolute value. Going further, the plus or minus one is due to the phase of a complex number. The actions are S_1=-E_1t_1, S_2=-E_2t_2, …, S_n=-E_nt_n in general. Divide the action by \hbar to get a pure dimensionless number, multiply by i and then exponentiate it, so you calculate \exp(iS_j/\hbar). The probability of any event is given by the sum over all possible paths

    \[P=\vert \mathcal{A}(i\rightarrow f)\vert^2=\vert\sum_{\gamma} c_\gamma e^{iS_\gamma/\hbar}\vert^2\]

It shows that this probability is maximized by the critical action, and that is the minimal action principle of classical physics. Equivalently, think about paths vectorially! The sum is optimized for the classical trajectories: critical action, minimum time or the shortest paths. Have you ever imagined being quantum-teleported or abducted by an alien civilization to another galaxy? It could happen spontaneously too, but with a very, very low probability, even though that event is allowed by the laws of quantum physics. Unfortunately, there is no alien close to me to show me the Universe beyond our galaxy more directly.
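Here is a toy numerical sketch of that cancellation: a made-up quadratic action S(x)=x^2 around a “classical” path at x=0, just to visualize the stationary-phase effect:

    import cmath

    def amplitude(hbar):
        # Paths labelled by x; action quadratic around the classical
        # path at x = 0. Sum the phasors exp(iS/hbar) over all paths.
        xs = [i * 0.01 for i in range(-1000, 1001)]
        return sum(cmath.exp(1j * x * x / hbar) for x in xs)

    # As hbar shrinks, only paths near the classical one (x ~ 0) add up
    # coherently; the far ones oscillate wildly and cancel each other.
    for hbar in (10.0, 1.0, 0.1, 0.01):
        print(f"hbar = {hbar}: |A| = {abs(amplitude(hbar)):.1f}")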

Part(II). QFT: a tale of where we stand in high energy physics.

  • Vacuum is the main fundamental object in quantum field theory. It is indeed related to the Fourier expansion of field operators

    \[\phi=\int\dfrac{d^3p}{(2\pi)^3}\left(a^{\dagger}(p)e^{-ipx}+a(p)e^{ipx}\right)\]

The quantum realm is the world of the quantum vacuum or the quantum void. Call it voidium if you want. Quantum fluctuations allow you to even surpass the conservation of energy by tiny time shifts (commit a robbery: nobody would notice if you return the money before the shop and the cash register open again; that is the essence of the uncertainty principle). Everything that can happen will happen. Everything? Well, not quite, but this is again another story.

  • What are the symmetries behind the known possible fields and interactions?

There are no forces in quantum physics; indeed, there are only interactions between particles. The formal distinction between interactions is likely just a misnomer for different classes of phenomena, but it is still useful these days. Why? Because if matter and energy are related via E=mc^2, the distinction between mass or matter and energy is quite a question not of mass or energy, but of other features, indeed related to the quantum world. Magic word: spin! Particles have angular momentum, an internal rotatory property we call spin. Fundamental forces are transmitted by integer-spin particles (spin zero, one or two); matter fields are spin one-half particles.

  • Bosons and fermions as the glue and LEGO pieces making up everything. You and I are not so different after all.

Spread out from the origin of time at t\sim 10^{-43}s until the current 14Gyr\sim 10^{18}s, or about ten to the 61 Planck times, the Universe is big: as far as we know 10^{26}m big, compared to the 10^{-35}m at its birth, ten to the 61 Planck lengths. However, it contains atoms, planets, comets, stars, galaxies,… Different scales of masses… You are only some tens of kg; planets or moons are about 10^{20} times that. Stars also have different spectra. You can still find stars with hundreds of solar masses in the Universe… Compact objects (even stellar ones) are weirder, but also existent. Take a solar mass and reduce it to the size of a continent: you get essentially a white dwarf star. To the size of a city: you get a neutron star. If you could compress them further, you would get black holes. Black holes are indeed the most massive compact objects we can find in this Universe!

The Universe has a mass of about 10^{53}kg. Its density is very low, about the vacuum energy density, of about 10^{-27}kg/m^3 (the Planck density is about 10^{97}kg/m^3), or 1 proton per cubic meter. Outer space is vacuum, basically. On the other side you have neutrinos, the least massive particles of the standard model (we do not even know their exact masses!), with about 10^{-39}kg. So masses in the Universe, in a hierarchy we do not understand, are distributed over 92 orders of magnitude, even more if you consider that dark energy could be some kind of ultralight particle. Observed scales, not going down to Planck scales, are separated by say 44 or 47 orders of magnitude in distance. Currently the universe has a temperature of about 3 K, when likely it began at the Planck temperature, 10^{32}K.

Old limitations of optics and telescopes were overcome. We now have new tools to observe wavelengths the human eye can not see unaided. Likely, machines will be showing us new ways to see the Universe: gamma rays, radioastronomy, neutrino astronomy and gravitational wave astronomy will be the powerful tools of the future, for sure. Electromagnetic astronomy is also doomed by cosmological observations (at least in a one-time isotropic, homogeneous time coordinate). The limit is the CMB. Beyond that, we will have to use neutrinos or gravitation: we can not see the very early universe, before primordial recombination, with photons.

  • QFT=Special Relativity+Quantum Mechanics=SR+QM. The first result of this fusion is the existence of antimatter (however, the known Universe contains a very, very low quantity of antimatter, fortunately for us!).

Game rules by relativity and quantum mechanics:

    \[E_\gamma=hf_\gamma=hc/\lambda_\gamma\]

    \[E^2=(mc^2)^2+(pc)^2\]

    \[\lambda=\dfrac{h}{p}=\dfrac{h}{\gamma mv}\]

Relativity=invariance under the Lorentz or Poincaré symmetry groups. You can classify particles with some numbers, just like you classify the elements of the periodic table. When are relativity and quantum mechanics important? Look at this size (energy) versus velocity plot:

  • Leptons and quarks. What are their properties? That means knowing their quantum numbers. Quantum numbers of elementary particles include (rest, invariant) mass, angular momentum (spin), parity, electrical charge, hypercharge (weak charge) and sometimes chirality or polarization degree (L or R for the usual left-handed or right-handed polarizations).

  • We remember the 3 generations of the SM. But what is mass? Firstly, elementary particles get masses from the Higgs field. BUT protons, which make you and hydrogenated atoms and stars, or heavy nuclei, get masses from the strong force! The so called chiral symmetry breaking is the way in which hadrons get masses. Note that a proton has a mass of about 1GeV/c². But the constituent u,u,d quarks are just 2.3+2.3+4.8=9.4 MeV. So, where is the remaining proton mass? Hint: the proton is much more complicated than this naive 3-ball picture. Plug about 1 fm into the E-x uncertainty principle: E=\hbar c/\lambda\approx 10^{-26}/10^{-15}J, that is, about 10^{-11}J of fluctuating quantum stuff. Protons can not be imagined like a three-ball coconut. A proton is instead a result of the QCD vacuum! So, the proton is (uud)+(gluon kinetic energy)+(particle+antiparticle hadrons)+… any kind of rubbish object. What is a proton then? Well, something like this will surprise you:

Mass from QCD is a highly non-trivial process. Indeed, some time ago, that process was called (I think it is still called so, but it is a horrible name) dimensional transmutation, or nonperturbative mass generation. Only about 1% of the proton mass comes from the Higgs field. You can possibly compare this to residual electromagnetism in your daily life. Why do walls not drop on your head? Electromagnetism is strong compared with gravity at common scales: about 10^{42} times stronger than gravity. BUT electric charges compensate each other, except for some residual forces, the Van der Waals forces (and some ionic or covalent variations), and you do not get killed by walls thanks to the electromagnetic residual forces of chemical bonding!
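A minimal arithmetic sketch of that mass budget, using the quark masses quoted above:

    # Valence quark masses vs. proton mass (values from the text, MeV)
    m_u, m_d = 2.3, 4.8
    m_proton = 938.3

    quark_mass = 2 * m_u + m_d  # uud
    print(f"valence quark masses: {quark_mass:.1f} MeV")
    print(f"fraction of proton mass: {quark_mass / m_proton:.1%}")  # about 1%
    # The remaining ~99% is QCD: gluon field energy plus quark kinetic energy.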

  • Hard part: Q.E.D. as quantum electromagnetism, and how to intuitively get a picture of the quantum fundamental rule P\sim\vert \sum_\gamma c_\gamma\exp(iS_\gamma/\hbar)\vert^2.

The action principle comes for free in a lagrangian formulation. The totalitarian principle applied to strong fields (in both curved and flat spacetime) implies another incredible result in field theory (yet to be experimentally tested). Strong fields CAN create particle pairs. This is the Schwinger effect, and it can easily be derived from the action principle. From purely energetic views of SR and QFT, turning on a big enough electric or magnetic field, you could create any suitable particle-antiparticle pair. For electrons in QED, you get the critical fields:

    \[E_c=\dfrac{m^2c^3}{e\hbar}\]

    \[B_c=\dfrac{E_c}{c}=\dfrac{m^2c^2}{e\hbar}\]
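Before moving on, here is a minimal numerical check of these two critical fields for the electron, with SI constants hard-coded:

    # Schwinger critical fields: E_c = m^2 c^3 / (e*hbar), B_c = E_c / c
    m_e = 9.109e-31   # electron mass, kg
    c = 2.998e8       # speed of light, m/s
    e = 1.602e-19     # elementary charge, C
    hbar = 1.055e-34  # reduced Planck constant, J·s

    E_c = m_e**2 * c**3 / (e * hbar)
    B_c = E_c / c
    print(f"E_c ~ {E_c:.1e} V/m")  # about 1.3e18 V/m
    print(f"B_c ~ {B_c:.1e} T")    # about 4.4e9 T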

The values of these fields are very big: E_c\sim 10^{18}V/m and B_c\sim 10^{9}T. QED is wrong at very large energies, but electromagnetism and weak forces are unified at energies of about 100GeV. Weak interactions are essential to understand radioactivity and how some particles “change identity” or flavor. This turns out to be necessary for stars to exist. The proton-proton process giving rise to stellar fusion is energetically possible, accidentally, and it is another surprising fine tuning of the SM:

    \[p+p\rightarrow D+e^++\nu_e,\qquad D+p\rightarrow {}^3He+\gamma\]

    \[{}^3He+{}^3He\rightarrow {}^4He+p+p\]

and this chain, like the CNO cycle, would not be possible without the observed features of the weak and strong nuclear forces. However, we do not know why we observe 3 copies of particles with identical properties except for their masses.

  • Gauge symmetries. The fundamental \Psi'=e^{i\alpha}\Psi global versus local gauge symmetry transformations. These transformations do not change the physics and determine the interactions via the gauge field A=A_\mu(x)dx^{\mu}.

The quantum electromagnetism of QED is the result of the complex field and the QM structure of the vacuum. Wave functions are not directly observable. They can be partially observed due to phase shifts (the Aharonov-Bohm effect) or via the Born rule. Majestic: you can only calculate the probability of field distributions in space-time. Quantum fields are generally complex-valued objects, and you get probabilities from amplitudes using the rule P\sim\vert\Psi\vert^2. That is it. The physics does not change if you multiply, in QED, the wavefunction by a global phase

    \[\Psi'=\exp(i\alpha)\Psi\]

If you now turn the phase local instead of global, that is, if you allow the phase to change in space-time as well, you are forced to introduce a new field if you want to recover invariance. This field is, for the U(1) case above, the electromagnetic field A_\mu. This gauge symmetry determines the structure of the interactions. And it can be generalized to non-abelian fields, like those required by the weak and nuclear forces. Gauge symmetry tells you whether you can forbid or allow certain interaction terms in the lagrangian device! Gauge symmetry also determines how the interactions arise between photons, electrons, W and Z bosons, and gluons (gluons and QCD are the exotic beasts here, since they contain self-interactions not seen in weak or electromagnetic interactions).
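Explicitly, for the U(1) case, replace the ordinary derivative by a covariant derivative and let the new field shift under the local phase \alpha(x):

    \[D_\mu=\partial_\mu+ieA_\mu,\qquad A'_\mu=A_\mu-\dfrac{1}{e}\partial_\mu\alpha(x)\]

With these rules, (D_\mu\Psi)'=e^{i\alpha(x)}D_\mu\Psi transforms exactly like \Psi itself, so any lagrangian built from \overline{\Psi} and D_\mu\Psi stays locally invariant.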

  • The running of the “fundamental constants”. Due to quantum effects, the vacuum itself is not static: it changes, it polarizes. The amazing consequence of this for usual fundamental physics is that fundamental constants are not constant anymore. That is,

    \[F_e=K\dfrac{Qq}{r^2}\]

K is NOT constant, and thus \alpha is not constant. The polarization of the vacuum makes the vacuum permittivity variable. Thus, \alpha=\alpha(r), or equivalently \alpha=\alpha(E). At distances well below the femtometer scale (1 fm = 10^{-15}m), the usual fine structure constant is not exactly 1/137. At the LHC, in fact, at energies of about 7TeV, the fine structure constant is about \alpha\sim 1/100. So, the running of the “constants” is slow. Indeed, it is a logarithmic variation, ruled by the so-called renormalization (semi)group equation and the so-called beta function:

    \[\dfrac{dg(\mu)}{d\ln\mu}=\beta(g(\mu))\]

Note the differences and similarities. In a quantum world, charges and masses get dressed or renormalized due to the quantum fluctuations and the Heisenberg principle. Vacuum polarizes, and in the case of QED, there is screening in the coupling constant, increasing its value (the opposite effect happens in QCD, there is antiscreening).
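A minimal sketch of that logarithmic running, using the one-loop QED formula with only the electron loop (a crude approximation: the real running includes every charged particle in the loops):

    import math

    # One-loop QED running with a single electron loop:
    # alpha(Q) = alpha0 / (1 - (2*alpha0/(3*pi)) * ln(Q/m_e))
    alpha0 = 1 / 137.036  # fine structure constant at low energy
    m_e = 0.000511        # electron mass, GeV

    for Q in (1.0, 91.2, 7000.0):  # 1 GeV, the Z pole, LHC-like energies
        alpha = alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))
        print(f"Q = {Q:7.1f} GeV: 1/alpha = {1 / alpha:.1f}")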

The fine structure constant gets bigger at shorter distances or, equivalently, plotted against the logarithm of the distance, alpha decreases with increasing distance. The vacuum is a nasty object in QFT. You can visualize the vacuum bubbles or loops of virtual particle-antiparticle pairs popping out from the fundamental amplitude:

    \[A=\vert-+-O-+-O-O-+-O-O-O-+\cdots\vert^2\]

and possibly many weird loops with subloops and topologies can also arise, due to the symmetries of the given interactions. For our 3 interactions, we get the SM as a gauge theory. Quarks are tied or glued by gluons. There are 3 color charges. There are two electrical charges (one hypercharge) and 6 flavors for both leptons and quarks. Baryons are 3-quark composite objects. There are colors R (RED), G (GREEN), B (BLUE) and anticolors \overline{R}, \overline{G},\overline{B}; the combinations making up hadrons must be colorless, since isolated quarks are not observable: every observable particle must be colorless. The weak interaction (or the electroweak interaction at scales of 100GeV) allows the change of flavor and weak charge.

The ideas above, of renormalization, vacuum polarization, undetermined (virtual) intermediate states, running coupling constants, and Feynman graphs, are iterated for the 3 interactions different from gravity. In QED you get 1 photon, in the electroweak theory three massive vector bosons, and in QCD you get 8 gluons: of the 9 color-anticolor combinations, 8 form the gluon octet, and the extra colorless singlet combination decouples. Every SM particle carries quantum numbers: color (only for quarks and gluons), electric charge, flavor (weak charge) and hypercharge, plus spin. Every particle, the excitation of a single field, can be seen as a wave perturbation in the field. The SM imposes:

  1. A simple gauge symmetry for the L part in (e,\nu), (u,d), the first generation.
  2. Gauge invariance and compensating field trick for any fundamental interaction.
  3. Optional (mandatory): mixing of generations is possible.
  4. Photon and gluons are massless, the W,Z, and H are massive by construction.
  5. Massive W,Z are problematic from the gauge theory viewpoint. This is what motivated the creation of the Higgs field and the SSB mechanism. The forces mediated by the massive W, Z are exponentially suppressed with distance, and the bosons themselves are unstable, decaying in short times.

Every particle has “polarization” modes, generally denoted by L and R. The SM is the electroweak theory plus the strong force part (quantum chromodynamics, QCD) explaining nuclei and hadrons. Recipe (oversimplification):

  • The SM is the Glashow-Weinberg-Salam model based on the gauge group G_g=SU(3)_c\times SU(2)_L\times U(1)_Y.
  • Electroweak forces are mediated by photons \gamma (massless) and gauge bosons W^{+},W^{-},Z.
  • The SM mixes particles between different generations and particles inside generations. The first generation, (u,d),(\nu_e,e), comprises the main matter of the universe and it is stable. Two other replicas of the first generation do exist. Why? Nobody knows for sure.
  • The SM does NOT contain gravity, which is negligible for particle interactions at subatomic scales in all the main circumstances.
  • That photons or gluons (the glue of QED and QCD) remain massless is due to the particular structure of the SM. The masses of the Z and the W are obtained like the other fundamental particles, via the Higgs field interaction.
  • SSB is a process similar to superconductivity and to the collective orientations of atoms/spins/particles in condensed matter systems. The fact that the Z-boson and the W-bosons are massive makes the weak and electroweak interactions short-range, unlike gravity or usual electromagnetism.
  • The Higgs mechanism is a two-part device or gadget: it contains the SSB (spontaneous symmetry breaking) tool, and the dynamical part via a Higgs self-potential.
  • What is the precision of the SM? About 1 part in 10^{12} in some cases! It rivals the precision of GR too! The measurements of the magnetic moment of the electron are that precise (the anomalous magnetic moment of the muon is a long standing problem in particle physics, pointing out a BSM theory, just like massive neutrinos): compare (g/2)_{th} vs (g/2)_{exp} below.
  • Open problems in the SM: the nature of Dark Matter (it can not be made of standard known SM particles), the strong CP problem (why is there no electric dipole moment of the neutron?), the hierarchy problem, the naturalness problem, the anomalous magnetic moment of the muon, the fact that neutrino oscillation patterns are very different from quark mixing patterns (via the different measurements of the CKM and PMNS matrices, due to Cabibbo-Kobayashi-Maskawa and Pontecorvo-Maki-Nakagawa-Sakata), why there is almost no antimatter in the Universe, the flavor problem (why 6 flavors? why 3 generations?), the nature of the QCD resonances, the early Universe picture of particle physics from the SM particles (in particular the EW phase transition), and the properties and nature of the quark-gluon plasma.

The SM gives, self-consistently, a solution to the “problem” of how to get the Z-W boson masses without spoiling the local gauge invariance of the fields. Mathematical details will be provided later. The Higgs mechanism is a two piece machine: a) the breakdown (SSB) mechanism, and b) the Higgs field dynamics; both (at least from a conservative viewpoint) are fully included in the SM. And, being more precise, the SM achieves its maximal precision in the measurements of the magnetic moment of the electron. Compare the theoretical prediction (supercomputers and high calculus are needed to compute it!):

    \[(g/2)_{th}=1.00115965218113(86)\]

with the experimental result

    \[(g/2)_{obs}=1.00115965218073(27)\]

Let me point out that these results are not the latest data or theoretical predictions. There is a certain tension between theory and experimental results, but it is not a huge one. 12 decimals of precision is quite a thing. Imagine knowing the distance to the Sun with such precision: a relative uncertainty of 10^{-12} over 1 AU (about 1.5\times 10^{11}m) is roughly 15 centimeters!
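A minimal sketch of how that tension is quantified, with the numbers quoted above:

    import math

    # (g/2) values; uncertainties are on the last two digits
    th, sigma_th = 1.00115965218113, 86e-14
    obs, sigma_obs = 1.00115965218073, 27e-14

    diff = th - obs
    sigma = math.sqrt(sigma_th**2 + sigma_obs**2)  # combined uncertainty
    print(f"difference: {diff:.1e}")
    print(f"tension: {diff / sigma:.2f} sigma")  # well below 1 sigma here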

On the other hand, there IS one big place where new physics does arise in the SM: the neutrino sector. We do know, from the last years of the 20th century and from current neutrino beams, solar neutrino experiments, reactor experiments and (as a lesser hint) from cosmological data, that neutrinos are special. Neutrinos are not only the smallest chunks of matter you can get in the SM (yet their concrete masses are not known!). Neutrinos change identity! Neutrinos come in 3 flavors or species (at least the SM neutrinos; theories do exist with more than 3… Why 3 light neutrinos? Why left-handed?). It turns out that neutrinos can transform among the 3 types when travelling long distances! In fact, there is a similar phenomenon inside hadrons: quarks also mix! Transitions between neutrino types are modeled by a gadget called the PMNS matrix, or neutrino oscillation matrix. Formally, there is also a CKM quark mixing matrix. Mathematically:

    \[\vert\Psi_\nu\rangle=U_{PMNS}\vert\Psi_m\rangle\]

    \[\vert\Psi_q\rangle=U_{CKM}\vert\Psi_{Q}\rangle\]

Experimental data show you that U_{CKM}\sim \mbox{diag}(1,1,1), while the neutrino mixing matrix is something much more complicated, something with entries like this:

    \[U_{PMNS}=\begin{pmatrix}\square & \bullet & \cdot\\ \circ & \bullet & \square\\ \circ & \bullet &\square\end{pmatrix}\]

The CKM is more or less diagonal, but it also seems to have substructure

    \[U_{CKM}=\begin{pmatrix}1 & \square & \circ\\ \square & 1 & \cdot\\ \circ & \cdot & 1\end{pmatrix}\]

Furthermore, it seems that somehow the mixing angles of these two matrices are complementary to each other; approximately, \theta_{CKM}+\theta_{PMNS}\sim\pi/4 (the so-called quark-lepton complementarity). Nobody knows why, and the third mixing angle of the PMNS matrix, \Theta_{13}, was measured only recently (a few years ago). Also, there are some hints of CP-violation (naturally expected from the SM) in the PMNS matrix (something that is well-tested in the quark setting). Did you know we have more neutrino unknowns? Beyond not knowing their masses, we do not know yet if the neutrino spectrum is normal (atomic-like) or inverted. We do not know if neutrinos are Dirac or Majorana particles. That is, neutrinos could be the only fermions in the SM that are their own antiparticles (a very bosonic trait!). Why does it matter? Well, if neutrinos are their own antiparticles, we could in principle understand why there is almost no antimatter in the observable Universe. To explain it, we should be able to understand how matter and antimatter cancelled out up to a difference of 1 part in 10^{10}, as in 10^{10}+1 matter particles versus 10^{10} antimatter ones. Otherwise, the whole Universe would be very different, or it would not exist!

Well, time to go on with asymptotic freedom, quark confinement and gluons…

Part(III). A short guide to QCD. From quarks to quark-gluon plasmas.

  • The main two features of QCD: asymptotic freedom and confinement. Asymptotic freedom is just the opposite behaviour to the screening in QED: the strong coupling decreases with decreasing distances! That is very counterintuitive. We are familiar with forces that grow weaker with distance; strong forces are different. At high energies, short distances, you are essentially “free” of the strong force. Subtle. Especially since we call the strong force, well, strong. Confinement is the weird feature of strong forces making free quarks (or color charges) invisible. You would then ask how we know quarks exist after all, if they do not exist as isolated objects. The main proof, beyond all the QCD evidence, is the jet structure we get from particle collisions. No free quarks, sorry, but quark bunches! Wibbly wobbly quarky quark stuff.
  • Hadrons come in two groups: baryons and mesons. Multiquark states with N>3 quarks, and even quarkless states (e.g., the glueballs or gluonium), are a known hot topic in QCD.
  • The model of quarks and partons. Partons were introduced by Feynman and Bjorken even before the quark theory was finished. Protons have a complex structure. At low energies, you can naively imagine hadrons as valence quarks, but at higher energies hadrons are made of valence quarks PLUS other wibbly wobbly timey wimey stuff. Oh, yes! Microscopically zooming into a proton is a fantastic journey itself.
  • The strongness of strong interactions. Why are the strong forces the maximal forces? Well, fortunately for your atomic nuclei and protons, it is so.
  • Quark-gluon plasmas (QGP). At trillions (American; billions European) of kelvin, you get a wild soup of unbound quarks and gluons. This is the quark-gluon plasma. It behaves as a perfect fluid.

The Manhattan project, I think you may not know this, provided funding for particle physicists during and after the Second World War. In the 1950s, many investigations revealed a surprising subatomic, subnuclear world. Many types of particles and resonances arose. Just like Mendeleev built a periodic table for the elements, particle physicists had to create a big frame for the world of hadrons they were discovering. A famous joke or affirmation in those years was that a Nobel Prize was associated with the discovery of every new hadron state/particle. Murray Gell-Mann (RIP) and Ne'eman (RIP), and independently Zweig, discovered a way to classify hadrons into schemes using group theory and quantum numbers. Zweig's “ace theory” was not popular even when it was pretty similar; it is an interesting example of how the same ideas arise in different people at the same time, and of how names do matter to sell your research! The eightfold way paved the road for the establishment of the quark model, and for the rise of QCD as a gauge theory beyond the S-matrix formalism reigning in the 60s. For instance:

You can do hadron spectroscopy with particle physics! I wrote about the names of those hadron states here on TSOR; you can search for them. It is quite remarkable how many curious unstable particles there are. Funny fact: the omega baryon was taken as inspiration for some Star Trek episodes, like the Omega Directive, related to Omega particles making warp travel impossible. Any hadron is very complex. Protons at very high energies are messy stuff: you can see inside protons other quarks and strange objects. For energies similar to the proton mass, you can still keep the 3-ball (uud) picture of the proton (and similarly for other baryons). BUT the valence quark picture is only an approximation at a certain scale. The quark model arising in the 1970s was made deliberately precise to understand the color charge (and why states like the sss \Omega^- or the \Delta^{++} particle exist). The quark model is indebted to O. Greenberg's parastatistics, an exotic (yet, even today!) topic related to quantum statistics beyond fermions and bosons.

On the other hand, the strong force is so weird due to confinement. Try to ionize a proton just like you do with an atom. Well, unlike atoms, you get a quark-antiquark state very soon. This feature is similar to particle pair creation due to the Schwinger effect in strong QED/gauge theory, but you get it for “free” in QCD. The strong force is the most quantum force of the 3 interactions of the SM. The absence of free quarks is very similar to the absence of magnetic monopoles. Just note that:

    \[\dfrac{E(proton)}{(2m_u+m_d)c^2}\sim 100\]

Thus, quarks are very relativistic as well!!!!! Remarkably, the same operation done over the hydrogen atom (the binding energy over the electron rest energy) gives you about 10^{-5}. So, unless it is excited, or unless heavier nuclei are considered, simple atoms are not generally relativistic at the level of their binding energies. That the strong interaction (SI) is very quantum and very relativistic is a known fact. Beyond the parton model by Feynman, coding some structure functions for hadrons, there are some interesting simplified models for the description of quarks and gluons. Maybe the most important model is that of strings. Yes, string theory was born as a strong interaction theory. Hadrons are just flux tubes of color, trapped, and wibbling and wobbling wildly. It turns out that the flow of gluon lines carries almost all the energy of the quarks and hadrons. Indeed, about 99% of the mass of ordinary matter is due to gluonic field interactions, or the color flux tubes. That hadrons are just balls tied up by strings is a useful picture, but not too faithful today, and spin two states are just an oddity in nuclear and particle physics. However, spin two interactions are known to be those caused by gravity and gravitons, so string theory transformed itself into a theory of everything, as it remains (though strangely incomplete) today. You can hear some discussions between theorists, and read some blogs, about gravity being the square of some Yang-Mills theory. Well, it is not quite precise to say it so simply, but it works in some theories and models, so keep an eye on that.

Another interesting QCD model is that of the spring (string) tension. Essentially, confinement is linear, with constant tension. The tension of any hadron is essentially about 1GeV/fm=10^5N. However, if you hit a proton with about 10TeV of energy, at a distance of about a (deci)attometer, the tension would be instead 10^{12}N or 10^{13}N. Taking into account that tension (force) is the Young modulus times the strain times the cross-sectional area (or the stress times the cross-sectional surface of the material), you get that hadron tension is huge, very huge. It can be compared to the tension of a meter of graphene, from its stiffness, or to that of one cm^2 of steel, for the greatest experimentally tested values! Thus, it is not surprising that nuclei and hadrons are so stable, is it? And yet, the string/spring model is a pretty simple explanation of all of this. You get

    \[F_q=\sigma=\mbox{constant},\qquad V_{string}(r)=\sigma r\]

and at shorter distances you would get asymptotic freedom and deconfinement via the potential

    \[V_d=-\dfrac{c\alpha_s}{r}+\sigma r\]

where c is some constant (generally written as 4/3, from the color factors) and the units of V are GeV. I would like to note that non-perturbativeness is essentially a key property of confined QCD. It yields exponential terms giving rise to particle pair creation, similarly to the Schwinger effect, via

    \[A=\exp\left(\dfrac{-m_\pi^2}{\sigma}\right)\]

up to unit conversion constants!
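A minimal unit-conversion check of the tension figures above:

    # Convert the QCD string tension from GeV/fm into newtons
    GeV = 1.602e-10  # joules
    fm = 1e-15       # meters

    sigma = 1 * GeV / fm
    print(f"1 GeV/fm = {sigma:.1e} N")  # ~1.6e5 N

    # The harder hit quoted above: ~10 TeV across ~0.1 attometer
    tension = 1e4 * GeV / 1e-19
    print(f"10 TeV / 0.1 am = {tension:.1e} N")  # ~1.6e13 N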

Well, the quark-gluon plasma has a temperature of about 10^{12}K. Take the main typical QCD energy: the chiral vacuum of QCD has an energy of about 100 MeV, that is, about 10^{-11} joules. Use the Boltzmann constant to turn this energy into the melting temperature of fundamental quarks and gluons, and you get that trillion (American; billion European) of degrees. Similar estimates can be done for the normal electric plasma (getting from tens of thousands up to millions of degrees, considering that typical atomic energies go from about 1 eV up to ionization energies of hundreds of eV) and, even more, you can guess the ultimate hot temperature, the Planck temperature, from this kind of argument: about 10^{32}K. What are the properties of the QGP? A simple list:

  • It behaves like an almost perfect fluid (without friction!).
  • It is not gluon transparent.
  • It has a complicated phase diagram, much more complicated and subtle than was initially expected.
  • QGP was the main component of the Universe from about one picosecond up to 1 or 10 microseconds after the Big Bang… Then protons, neutrons and other hadrons became confined.
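As a quick numerical aside, a minimal sketch of the temperature estimates above via T=E/k_B:

    # Temperature equivalent of an energy scale: T = E / k_B
    k_B = 1.381e-23  # Boltzmann constant, J/K
    eV = 1.602e-19   # joules

    scales = {
        "atomic physics (~1 eV)": 1 * eV,
        "QCD / chiral scale (~100 MeV)": 100e6 * eV,
        "Planck energy (~1.2e19 GeV)": 1.2e28 * eV,
    }
    for name, E in scales.items():
        print(f"{name}: T ~ {E / k_B:.1e} K")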

As an image works much better, let me show you some representations of the QCD phase diagram (similar to the water phase diagram you study at school):

Let me remind you of a thing: the strong coupling “constant” is about 0.118 at LHC energies, and vacuum Feynman graphs should be counted with care, as we have 3 colors and a non-abelian (non-commutative) gauge theory! However, beyond these interesting properties, there is a general framework for the full SM, and now we will see its full power…

Part(IV). The power of the SM.

The SM lagrangian is, formally simplified, a sum:

    \[L_{SM}=L_m+L_g+L_{int}+L_{H}=i\overline{\Psi}\gamma\cdot D\Psi+F^2+G^2+g_Y\overline{\Psi}\Psi\phi+\vert D\phi\vert^2-V_\phi\]

The simplest Higgs potential dynamics is encoded via

    \[V_\phi=-m^2\phi^2+\lambda\phi^4\]

Further, more complicated interaction terms are allowed in BSM theories, but the SM Higgs sector is just a g\phi^3+\lambda\phi^4 theory from a perturbative viewpoint.

The SM does NOT contain gravity. It only codes 3 out of the 4 fundamental interactions. However, the framework is self-consistent (up to some technical problems we do not know how to solve yet) with the Higgs mechanism (SSB plus the Higgs potential). Interactions?

  1. Strong force by 8 gluons.
  2. Weak force by W and Z bosons.
  3. Electromagnetic force by photons.
  4. Masses are given to the elementary particles by the Higgs, with interaction terms of Yukawa type, not coming from any symmetry but from the Higgs mechanism and its dynamics.

Gravity is a force apart from the SM, even when you can in principle calculate graviton scattering processes. Taking into account loop corrections is a nightmare with gravity: the Feynman graphs blow up in number and you can not control the destiny of the nasty infinite terms spoiling renormalizability. Thus, we are hoping a new BSM theory will help us to solve this. QG in the form of superstrings/M-theory or LQG was designed to live with these problems better. Today, they have helped with some fundamentally mathematical theoretical details, but we lack stringy/loopy experimental support. We have, however, three mysterious generations, the 6 quarks and 6 leptons, the 4 kinds of gauge bosons and the Higgs-like boson at 125 GeV telling us that something else is required, but we do not know how, or what it is. This post is already showing you:

  • There are known knowns.
  • There are known unknowns.
  • There are unknown unknowns, likely, out there, hidden in the noise of our current data.

How does the LHC work? Take hydrogen (from water, if you want), and strip off its electrons with electric fields; you need high frequency electric fields. Then inject the particles, with frequencies between 10kHz and 40MHz. Create low temperature tubes (1.9K, about 2K) in the LHC main rings. Get superconductivity with magnetic fields of about 8.3T. From the ionized hydrogen atoms, you get bunches of protons (10^{11} protons per bunch). Take groups of about 3000 bunches and insert them into a 26 km long collider by successive injections. With bunches crossing at 40 MHz, you can show that each proton circles the full LHC ring about 11245 times per second.

Then, build up some cool detectors (ATLAS, CMS, ALICE and LHCb are their names, plus some new detectors under construction to test other fantastic theories). Then, build up a cool, wonderful computing system and architecture, such that you can pile up data with detector times of about 3 microseconds. Note that you can even “detect” particles like the tau, with a lifetime of about 300 femtoseconds, with response times of 25 ns; and, even more, you can indeed hint at the existence of very short-lived resonances like the Higgs, the top quark and other known particles with lifetimes of fractions of a yoctosecond!!!!! But you look at energies, not at detector response times. Essentially, any particle collider is a time-series machine for the energy-mass-frequency and the number of events, after good statistical and experimental analysis. That is it.
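A minimal check of that revolution figure, assuming the LHC circumference is about 26.7 km:

    # Proton revolution frequency in the LHC ring
    c = 2.998e8            # speed of light, m/s (protons are ultrarelativistic)
    circumference = 26659  # assumed LHC ring length, m

    f_rev = c / circumference
    print(f"revolutions per second: {f_rev:.0f}")  # about 11245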

If you think all this is useless, let me tell you… about history, and later about medicine. Any particle collider has transversal applications. A complex detector like ATLAS or CMS is a set of wires, electronics and apparatus with high-end applications. Of course, you can think that a 14 G€ machine is expensive. But you must think globally. The LHC has cost every European citizen only a few euros per person. Does it deserve the spending of the money? Why make 9 T magnetic fields and 15-meter wires/detectors? The same question arises in different periods of human history. Nobody, for sure, predicted that GR and the gravitational correction to time measurements would be necessary to keep you from getting lost with the GPS system, but it is true. You could not get GPS devices to work properly if GR were not correct. There, with satellites moving at about 14000 km/h, and clock offsets of about 7 (special relativistic) to 45 (general relativistic) microseconds per day, netting about 38 microseconds per day, the gravitational corrections of GR are necessary to get a proper position at sea, or in any remote part of the globe (a numerical sketch of these clock corrections follows the detection list below). What about accelerators? The first use of accelerators was, in fact, TV monitors. Did you enjoy old TV? Cathode rays, studied by J.J. Thomson, were a basic tool for the first TV designs. Van de Graaff generators are still common tools for showing the effects of electricity, but you also have other interesting accelerators named pelletrons, and the Cockcroft-Walton generator. There are also linear accelerators (LINACs) used for food irradiation, cyclotrons in nuclear research and medicine, and synchrotrons… Synchrotron radiation is important in medicine, but also in materials research and electron microscopy. You can kill microbes and sterilize materials with synchrotron radiation, and what about those X-ray sources? What about radiation therapy? You have now PET and proton therapy too! You MUST know an important thing: PET arose at LEP, the previous collider on the LHC site. And much of current collider technology, based on calorimetry and crystals at the detectors, is being reused for prediagnosing terrible illnesses. The production of radiodrugs is also made with collider-aided tools. Ionizing radiation detectors are also important pieces of technology, natural in colliders, that find a refurbished use in medicine. Ionizing particles are naturally charged. Radiation is counted or detected by semiconductors (diodes). Beyond nuclear weapons, you can find a full set of peaceful uses of radiation tools: archaeology, vulcanology, nuclear safety at nuclear reactors, fire detectors… How can we detect radiation?

  1. Ionizing radiation is detected with counters (gas or semiconductor) and dosimeters using photographic thin films.
  2. Exciting radiation is detected with thermoluminescent materials, spark counters and other gas detectors.
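
Coming back to the GPS claim above, here is a minimal sketch of the relativistic clock corrections, assuming textbook values for the Earth’s gravitational parameter and a circular GPS orbit:

    # Minimal sketch of the GPS relativistic clock corrections (assumed values).
    from math import sqrt

    GM = 3.986004418e14   # Earth's gravitational parameter (m^3/s^2)
    c  = 299_792_458.0    # speed of light (m/s)
    R_earth = 6.371e6     # mean Earth radius (m)
    r_sat   = 2.656e7     # GPS orbital radius (m), ~20,200 km altitude
    day = 86_400.0        # seconds per day

    v = sqrt(GM / r_sat)                              # circular orbital speed, ~3.9 km/s
    sr = -(v**2 / (2 * c**2)) * day                   # special-relativistic slowdown
    gr = (GM / c**2) * (1/R_earth - 1/r_sat) * day    # gravitational blueshift speed-up

    print(f"SR: {sr*1e6:+.1f} us/day, GR: {gr*1e6:+.1f} us/day, net: {(sr+gr)*1e6:+.1f} us/day")
    # about -7 us/day (SR) and +45 us/day (GR), netting roughly +38 us/day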

Who would have told Dirac that antimatter would be useful for Positron Emission Tomography (PET) when he derived his famous equation?

    \[\left[i\hbar\gamma^\mu\partial_\mu-e\gamma^\mu A_\mu-mc\right]\Psi=(i\hbar \gamma\cdot D-mc)\Psi=0\]

Medical imaging is a wonderful branch of applied particle physics. Beyond echography, NMR (nuclear magnetic resonance) requires high magnetic fields, non-ionizing radiation and nuclear relaxation. X-rays are ionizing (that is why you can not X-ray yourself every day), CT scans imply higher doses but are not used too often per year per person, and PET is important in oncology, with low resolution, using essentially fluorine-18 isotopes. Neurology and cardiology are benefiting from particle physics too. Furthermore, isotopes are necessary for those devices, so we curiously need one of the most fascinating predictions of Mendeleev's table, 150 years old: technetium. Technetium sources are required in many medical nuclear resources, not only in PET+CT, but also in SPECT, using the metastable technetium-99m isotope. Thus, radiation therapy, historically using X-rays, has evolved into a more multiparticle setting. You can use not only gamma rays for some procedures, you can also use electrons and protons for therapy. Proton therapy is new and promising, especially with the great precision of roughly 220 MeV proton beams. Old therapies based on Co-60 are known to have generated patient issues some decades after the treatment. Proton therapy machines are now in top hospitals. In principle you could use any “soft” particle for therapy. Neutrinos? Carbon ions? CERN is aware of all of this, and it has some multidisciplinary projects like ENLIGHT and BioLEIR about how to simulate proton radiation and the effects of proton or oxygen-16 therapy on your health. Gammagraphy is also a medical tool, though doses matter: dose is defined as radiation energy absorbed per unit mass. And radiodrugs specifically designed for customized treatments are on the way. So, please, never say that particle or fundamental physics is useless. You have induction kitchens and microwaves at your cooking stations thanks to radiation studies! Nuclear transmutation is currently possible, and the alchemy promises of ancient times secure your health if done properly. You can forget what a barn is (a particle physicist's unit of area equal to 10^{-24}cm^2), or that the top quark and Higgs masses are about 173 and 125 GeV, but you should know particle physics does matter in your lifetime. Even if you don't know of or don't find new physics because your job is very different, the searches for SUSY, Dark Matter, extra dimensions or black holes will surely affect you collaterally. Particle colliders are not designed, in the high energy physics community, to produce a single concrete particle, but medical applications are different. Now, while reading these lines, you are being crossed by billions of neutrinos, and by some muons.

Part(V). General Relativity and the LCDM model.

  • General relativity=Equivalence principle+SR. GR=EP+SR.
  • Curvature=Energy-Momentum.
  • Gravity=Pseudoforce=Geometry.
  • There are gravitational waves.
  • Gravity is weak generally, but it can also be strong at big masses or high densities.
  • GR needs quantum gravity when the Schwarzschild radius (or gravitational size) equals the quantum size (the Compton wavelength). That is reached (naively) at the Planck length, 10^{-35}m. Check (a numerical sketch follows the formula):

    \[2\Lambda_Q=R_s=L_P\leftrightarrow\; L_P^2=\dfrac{G\hbar}{c^3}\]
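
A one-line numerical check of that length scale, as a minimal Python sketch with CODATA-style constants (values assumed from memory):

    # Minimal sketch: the Planck length from G, hbar and c.
    from math import sqrt

    G    = 6.67430e-11      # Newton constant (m^3 kg^-1 s^-2)
    hbar = 1.054571817e-34  # reduced Planck constant (J s)
    c    = 299_792_458.0    # speed of light (m/s)

    L_P = sqrt(G * hbar / c**3)
    print(f"Planck length ~ {L_P:.2e} m")   # ~1.6e-35 m, as quoted above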

  • Spacetime tells matter-energy how to move; matter-energy tells spacetime how to curve (warp). There is no torsion in classical GR, but you can include it to get Einstein-Cartan theory, with nonsymmetric energy-momentum or Einstein tensors.
  • The large-scale structure of spacetime and the LCDM standard cosmological model. This is the current analogue of the SM for the largest cosmic structures we have today. It predicts a lot of things, and explains current data as well as the SM does.
  • We need dark matter (or MOND/MOG) to explain galactic rotation curves and the velocity dispersion of elliptical galaxies:

    \[V^4=G^2M\Sigma=G^2M^2/R^2\]

or

    \[V^4=2GMa_0\]

with \Sigma=M/R^2, a_0=G\Sigma/2=GM/2R^2.

  • The Universe is expanding with H_0\sim 70\,km/s/Mpc at large scales. There are some tensions between different measurements of the Hubble parameter H_0 at the current time. But on average, it is about 70 in conventional units (km/s/Mpc).
  • The Big Bang model: the cosmic microwave background and its anisotropies confirm the LCDM previsions.
  • The current CMB temperature is about 2.73 K, up to some anisotropies in the sky. A cosmic neutrino background is also expected, at about 1.95 K or less (depending on extra non-SM neutrinos and other BSM physics). The relic graviton background is also expected, at about 0.9 K or lower: T_{CgB}\sim\sqrt{2/N}T_{CMB}. You get the 0.9 K by counting the particle species degrees of freedom of the SM. If there were additional particles, the cosmic graviton background would be colder than 0.9 K.
  • The Universe's density is close to the critical density, about 5-6 proton masses per cubic metre. Primordial fluctuations were the seeds of the current irregularities given by galaxies and other cosmic structures.
  • The universe is flat (Euclidean) at cosmic scales (despite spacetime being intrinsically curved). This calls for inflation.
  • Dark matter, if real, should create a wind or flux with a speed of about 300 km/s.
  • The main evidence for cold dark matter comes from flat rotation curves in spiral galaxies and the velocity dispersion in elliptical galaxies. Zwicky found early evidence of this in 1933 (more mass was required to explain galactic motions), and Vera Rubin confirmed it in the 1970s. There is further evidence for DM from gravitational lensing observations and the Bullet Cluster.
  • Galaxies are “flat” (disc-like) due to dissipative interactions of ordinary matter under gravity. The role of DM in galaxy formation and evolution is still a hot topic of research.
  • Simulations of the Universe with DM and/or Dark Energy are consistent with the LCDM paradigm. However, that does not imply that models without dark matter can not exist. DM evidence comes from different sources, though, not a single one. Simulations have the power to discriminate between models.
  • Dark matter particles are collisionless and do not cluster like baryons, but they form haloes. Halo dynamics is, however, poorly understood. Haloes are believed to be spherically symmetric, but we do not know for sure.
  • Type Ia supernovae measure the Hubble constant and hint that the expansion of the Universe is accelerating, not decelerating.
  • Dark energy, or the cosmological constant, remains a big puzzle even today. What is it? Quintessence? Phantom energy? The mere vacuum energy? Roughly 70% dark energy, 25% dark matter, 5% normal matter: we are a mere 5% of the Universe. DM is likely some type of neutral particle not in the SM (one or several types!). We have some ideas of what DM is, but no proof of its existence by direct production has been achieved so far.
  • GW astronomy plans to measure the Hubble parameter with precision in order to settle some tensions in the current data. H_0=\dfrac{\dot{a}}{a} evaluated at the current time t=t_0.
  • Friedmann equations. For homogeneous and isotropic Universes, like ours, the GR field equations can be recast into two simple equations, called the Friedmann equations (a numerical sketch of their use follows below):

    \[\left(\dfrac{\dot{a}}{a}\right)^2=\dfrac{8\pi G\rho}{3}+\dfrac{\Lambda c^2}{3}-\dfrac{\kappa c^2}{a^2}\]

    \[\dfrac{\ddot{a}}{a}=-\dfrac{4\pi G}{3}\left(\rho+\dfrac{3P}{c^2}\right)+\dfrac{\Lambda c^2}{3}\]
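
Here is that sketch: assuming fiducial flat LCDM parameters (H_0=70 km/s/Mpc, \Omega_m=0.3, \Omega_\Lambda=0.7), integrating da/(aH(a)) from a\sim 0 to a=1 yields the age of the Universe.

    # Minimal sketch: age of a flat LCDM universe from the first Friedmann equation,
    # t0 = integral of da/(a*H(a)); the parameters below are assumed fiducial values.
    from math import sqrt

    H0 = 70.0 * 1000 / 3.0857e22   # 70 km/s/Mpc converted to 1/s
    Om, OL = 0.3, 0.7              # matter and vacuum fractions (flat universe, kappa=0)

    def H(a):
        # H(a) = H0*sqrt(Om/a^3 + OL) for flat LCDM (radiation neglected)
        return H0 * sqrt(Om / a**3 + OL)

    # Midpoint-rule integration of da/(a*H(a)) from a ~ 0 to a = 1
    N, t = 100_000, 0.0
    for i in range(N):
        a = (i + 0.5) / N
        t += (1.0 / N) / (a * H(a))

    Gyr = 3.1557e16                # seconds per gigayear
    print(f"Age of the universe ~ {t/Gyr:.1f} Gyr")   # ~13.5 Gyr with these parameters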

  • Known issues: singularities, the nature of dark matter and dark energy, the detection of the cosmic neutrino background and the relic cosmic graviton background, the detection of the stochastic gravitational wave background, the detection of the inflationary signatures of the universe, the testing of multiverse ideas (possible?), and the final fate of the observable universe (will protons decay? will spacetime disappear? what will black holes leave after they evaporate?).

After the two modern revolutions of the 20th century, physicists are still stunned by their precision. Relativity (both special and general) and Quantum Physics (in its ultimate current form, the Standard Model) rule with surprising accuracy and precision in the realm of experimental physics.

Two pillars, yin and yang of physics: relativity (general relativity) reigns in the macroworld, while quantum mechanics (the Standard Model) remains unbreakable in the microworld. Where do we stand in our search for the ultimate theory? Let me take a trip with you through an overview of what we know (more or less) to be true for both extreme theories and scales.

What is time? What is space? What is mass or energy? What are the fundamental forces? These questions remain as classical as ever, even when translated into the quantum realm. In fact, it was the genius of Einstein and many other great scientists that operationally defined what they are (more or less, since our scientific knowledge is provisional).

Special relativity (SR) was built in order to unify the laws of mechanics and the (Galilean) principle of relativity with the laws of electromagnetism (especially, the symmetry of the Maxwell equations). What is motion? The change of position in time with respect to some reference frame. What is time? Just a parameter or coordinate in a four-dimensional set-up. It shows that the marriage of classical mechanics and electromagnetism (in 4d) can be done, saving the relativity principle at a cost. In one-time relativity, 3-speeds (and 4-speeds) of signals are limited to be less than or equal to the speed of light. In any spacetime diagram, you get a cone when propagating at the speed of light. Light signals relate space with time, and light also relates mass with energy:

    \[X=ct\]

    \[E_0=mc^2\]

However, this fact also implies that Newton's gravity can not be right. Reason? Gravity propagates instantaneously in Newtonian gravity! That is forbidden in special relativity. Einstein realized this, and he had to struggle with more advanced mathematics to find a theory, locally consistent with special relativity, containing gravity. This theory, a locally special relativistic theory of gravity, is what general relativity (GR) is. However, the theory proved itself to be greater than its own creator and inventor could ever have imagined in his lifetime. If you plug G_N=0 or c=\infty into GR, you basically recover SR from GR (there is some nasty stuff, as we saw in some recent log). GR is a theory that models spacetime with a metric field. A metric field is a matrix, usually symmetric (I will only be considering, as is usual, gravity without torsion here, so the Einstein tensor remains symmetric). Flat spacetime is just the normal Minkowski metric (a diagonal matrix!):

    \[ds^2=c^2dt^2-dx^2-dy^2-dz^2\]

up to a global sign convention. This GR theory is fascinating. It explains gravity as a curvature of spacetime. In fact, the deviation of the usual ratio of a circle's perimeter to its diameter is due to gravity itself:

    \[\dfrac{Perimeter}{Diameter}-\pi\sim \dfrac{G_NM}{c^2r}\]

Gravity, treated as a force, is really a pseudoforce. Why do we want quantum gravity? It is not only that we want something beyond calculating the cross-section of 2 gravitons transforming into 2 photons (Skovelev):

(1)   \begin{equation*} \sigma(GG\rightarrow\gamma\gamma)=\dfrac{k^4\omega^2}{160\pi}=\dfrac{\pi d_S^2}{10}\end{equation*}

For the electron at rest, it would be very tiny, \sigma\approx 10^{-110}cm^2.

Exercise: derive the above cross-section (be aware of identical particles effects) from

    \[\dfrac{d\sigma}{d\cos\theta}=\dfrac{k^4\omega^2}{64\pi}\left(\cos^8(\theta/2)+\sin^8(\theta/2)\right)\]

Exercise (II): check the graviton-photon to graviton-photon formula and tell what is the main problem it has compared to the previous formula.

    \[\sigma(G\gamma\rightarrow G\gamma)=\dfrac{k^4\omega^2}{64\pi}\dfrac{1+\cos^4(\theta/2)}{\sin^4(\theta/2)}\]

It is not only that strong fields allow particle pair creation, e.g., via the Schwinger effect (a numerical sketch of the critical field follows the formula)

    \[\Gamma_S=\dfrac{e^2E^2}{(2\pi)^3}\sum_{n=1}^\infty \dfrac{1}{n^2}e^{-\frac{n\pi m^2c^3}{e\hbar E}}\]
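
A minimal sketch of the critical (Schwinger) field E_c=m^2c^3/(e\hbar) that controls the exponential suppression above, with assumed CODATA-style constants:

    # Minimal sketch: the Schwinger critical field for electron-positron pair creation.
    m_e  = 9.1093837015e-31   # electron mass (kg)
    c    = 299_792_458.0      # speed of light (m/s)
    e    = 1.602176634e-19    # elementary charge (C)
    hbar = 1.054571817e-34    # reduced Planck constant (J s)

    E_c = m_e**2 * c**3 / (e * hbar)
    print(f"Schwinger critical field ~ {E_c:.2e} V/m")   # ~1.3e18 V/m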

GR is incomplete. However, GR is a wonderful theory. As is SR. SR says:

    \[(\mbox{Proper time})^2=(\mbox{Time})^2-(\mbox{Distance})^2\]

using units in which c=1. The proper time measured by a clock at rest equals a certain combination of the time of the clock in motion minus the distance, using something similar to Euclidean triangles. They are indeed hyperbolic triangles. GR says spacetime is elastic and dynamical. And the equivalence principle, in any of its forms, says in the end that gravity is just curvature or geometry. The shape of space in time implies that spacetime grows somehow. It expands (despite the static preconception of the theory when Einstein created it, it soon proved to predict the Universe as something moving itself!). The Einstein Field Equations are pretty simple:

    \[G_{\mu\nu}+\Lambda g_{\mu \nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}\]

The vacuum energy density is still a mystery, so that is one of the reasons to go beyond GR:

    \[\rho_\Lambda=\dfrac{\Lambda c^4}{8\pi G_N}\]

Essentially, its value today is a few proton masses per cubic metre (do the numbers yourself, or see the sketch below!), and it is very similar (coincidence problem!) to the matter energy density or dark matter energy density today. But it was not so in the past! The cosmological constant can be interpreted as a Lagrange multiplier, a volume pressure term or a negative counterterm in the EFE. Nobody knows why it has the value it has today. The curvature of spacetime has a temporal component. The purely temporal components of the curvature trigger the Newtonian potential at weak gravitational fields. It also implies the existence of gravitational fields/waves traveling at the speed of light… Or is light the one that travels at the maximal possible speed in local spacetime? The stiffness of spacetime is inversely proportional to G_N. It is pretty big, so you need big masses or densities in order to make curved-spacetime effects appear clearly. GWs are so weak, we had to search for them within the noise on Earth (in space the story is completely different, but that is the subject of a future blog post!). Just as the LHC probes zeptometric scales, GR probes much bigger scales. The SM has a precision of 1 part in 10^{10} or so. GR has similar values of precision in some observables. Gravity bends spacetime and triangles are, in principle, curved (it turns out that the largest spacetime structures of the Universe are Euclidean though! This flatness puzzle can be addressed using inflationary ideas). What else? Einstein's theory of gravity, GR, predicts as well:
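
If you want to do the numbers for the vacuum energy density, here is a minimal sketch, assuming a Planck-satellite-like value \Lambda\approx 1.1\cdot 10^{-52}\,m^{-2}:

    # Minimal sketch: vacuum energy density from the cosmological constant.
    from math import pi

    Lam = 1.1e-52              # cosmological constant (1/m^2), assumed ballpark value
    c   = 299_792_458.0        # speed of light (m/s)
    G   = 6.67430e-11          # Newton constant (m^3 kg^-1 s^-2)
    m_p = 1.67262192369e-27    # proton mass (kg)

    eps = Lam * c**4 / (8 * pi * G)   # energy density (J/m^3)
    print(f"rho_Lambda ~ {eps:.2e} J/m^3 ~ {eps/(m_p*c**2):.1f} proton rest masses per m^3")
    # ~5e-10 J/m^3, i.e. roughly 3-4 proton rest masses per cubic metre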

  • Precession of orbits. Abandon your Keplerian world. Ellipses precess. The critical case in the solar system is the famous orbit of Mercury. It was the genius of Einstein to realize that his theory could explain Mercury without “dark matter” (the hypothetical planet Vulcan).
  • Gravitational time delay. The closer you are to a heavy mass, the slower your time flows.
  • Gravitational lensing of light. Eddington checked this a century ago, in 1919. Einstein was already a celebrity, but this confirmation of his GR theory elevated him to the level of a God of Physics.
  • The existence of vacuum solutions we call black holes. Even though other people had speculated about dark stars before in Newtonian gravity, GR naturally contains solutions with such dark features. We do not understand their ultimate destiny. That is another reason why GR is not complete.
  • Deviations from Euclidean geometry are measured with angular measurements:

    \[\alpha+\beta+\gamma-\pi\sim \dfrac{GM}{c^2r}\]

  • The universe is expanding, and it likely had a beginning in time.
  • The universe has a vacuum energy that is not null (a two-decades-old rediscovery of Einstein's cosmological constant).
  • Gravitational waves exist (radically new astronomy by LIGO is going on at these moments). The era of multimessenger astronomy is just beginning.
  • Black holes are real things. Recently, we took the photo of M87*, and Sgr A*, our galactic BH, is being analyzed right now.

Let me show you some GW formulae:

  • Gravitational luminosity formula reads

    \[L_{GW}=-\left(\dfrac{dE}{dt}\right)_{GW}=\dfrac{G}{5c^5}\left\langle\dddot{Q}_{ij}\dddot{Q}^{ij}\right\rangle\]

such that, for a circular binary system

    \[L_{GW}=\dfrac{32G}{5c^5}\mu^2a^4\Omega^6=\dfrac{32G^4\mu^2M^3}{5c^5a^5}\]

for M=M_1+M_2 and \mu=M_1M_2/M.

  • The orbital radius decay due to GW emission and the coalescence time are given by (a numerical sketch follows the formulas):

    \[\dot{a}=-\dfrac{64G^3}{5c^5}\dfrac{\mu M^2}{a^3}\]

    \[\tau_c=\dfrac{5}{256}\dfrac{c^5a_0^4}{G^3\mu M^2}\]
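
As a minimal numerical sketch, here is the coalescence time from the formula above for an assumed Hulse-Taylor-like binary (two 1.4 solar-mass stars; the initial separation is an illustrative guess):

    # Minimal sketch: GW coalescence time of a compact binary.
    G    = 6.67430e-11        # Newton constant (m^3 kg^-1 s^-2)
    c    = 299_792_458.0      # speed of light (m/s)
    Msun = 1.989e30           # solar mass (kg)

    M1 = M2 = 1.4 * Msun      # two neutron-star-like masses (assumed)
    M  = M1 + M2
    mu = M1 * M2 / M          # reduced mass
    a0 = 1.0e9                # initial separation in metres (illustrative guess)

    tau = (5.0 / 256.0) * c**5 * a0**4 / (G**3 * mu * M**2)
    print(f"Coalescence time ~ {tau/3.156e7:.1e} years")   # ~1e8 years for these numbers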

Is the LCDM model fair? Yes:

  • It predicts all the classical GR tests, as it is based on GR.
  • It predicts the observed expansion of the Universe.
  • The Universe is spatially homogeneous and isotropic. The perfect cosmological principle is wrong: the Universe is not eternal. A restricted cosmological principle holds today, though. We are not special. At large scales, the Universe looks the same everywhere.
  • The scale factor measures expansion via R(t)=a(t)R_0. The GR field equations (EFE) for LCDM reduce to the Friedmann equations.
  • Critical density is close to the cosmic density, \rho_c=3H_0^2/8\pi G. Knowing H_0 allows you to measure the age and size (up to the scale factor) of the Universe. Knowing H_0 and G allows you to compute the critical density for the Universe to collapse. We are close to that value, but the accelerating Universe is diluting galaxies into vacuum. To calculate the critical density, take the Hubble law v=H_0R and equate the kinetic energy of a test mass to its gravitational potential energy (a numerical sketch follows below):

    \[\dfrac{1}{2}mH^2_0R^2=\dfrac{4}{3}\pi G\rho R^2 m\]

and then \rho_c=3H_0^2/8\pi G follows straightforwardly from elementary algebra.
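
The promised sketch, assuming H_0=70 km/s/Mpc:

    # Minimal sketch: critical density rho_c = 3*H0^2/(8*pi*G) in protons per cubic metre.
    from math import pi

    H0  = 70.0 * 1000 / 3.0857e22   # 70 km/s/Mpc in 1/s
    G   = 6.67430e-11               # Newton constant (m^3 kg^-1 s^-2)
    m_p = 1.67262192369e-27         # proton mass (kg)

    rho_c = 3 * H0**2 / (8 * pi * G)
    print(f"rho_c ~ {rho_c:.2e} kg/m^3 ~ {rho_c/m_p:.1f} protons per m^3")
    # ~9e-27 kg/m^3, i.e. roughly 5-6 proton masses per cubic metre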

  • Some parts of the cosmic history are still surrounded by mystery and unknowns. We have to live with ignorance there. The Planck era and the inflation era are still hard to test. We believe we understand the QCD era, more or less, but not with absolute certainty.
  • Light element abundance is another great prediction of LCDM (via Big Bang nucleosynthesis). Indeed, it fits observations nicely. We have to check some young stars of the early Universe here, the famous population III. The James Webb Space Telescope (JWST) will be looking at Pop III stars and it will show us wonderful things for sure. The current universe is old; stars are second or third generation stars, like our Sun (a 3rd generation star).
  • GWs are there. We will see the Universe's BHs and other fantastic GW sources, invisible to light, with these tools. GW evidence before the LIGO discovery 3 years ago was found in binary pulsars (1993 Nobel Prize).

The Big Bang happened not in a single place, but everywhere, at a single moment of time, 13.7 Gyr ago. There is no centre of the Universe. We see the past of the Universe in the night sky. During the first seconds, the Universe created radiation, then the first particles arose via QCD decoupling, later it created nuclei and elements (primordial nucleosynthesis), and, finally, we got astronomically/astrophysically bound objects like galaxies, clusters,… Evidence for the Big Bang comes from the elements that form everything, radiation from the CMB and its anisotropies, structure formation… The nuclei arose after the first 3 minutes, and at about 380000 years, recombination became possible: atoms formed and the light of the CMB was released. Inflation is required to explain some puzzles (flatness and the anisotropies are the main two, but there are other problems that are difficult without inflation). Inflation naturally requires scalar fields (or something similar) exponentially inflating the universe. The matter-antimatter asymmetry is responsible for us being here. Neutrinos likely hold part of that dark mystery, or even a partial solution to dark matter. BHs can naturally violate the conservation of baryon number and thus trigger proton decays. The Planck mass naturally gives a lower theoretical bound of 10^{45} years (or even longer with care, about 10^{140} yrs) for the proton lifetime from BH virtual fluctuations and/or spacetime foam models.

That is all…folks. The end? Choose a final death for our Universe:

  1. Big Freeze (No Freezer will save you).
  2. Big Crunch (No piston will save you).
  3. Big Rip (No gravity will save you).
  4. Little Rip (No soft gravity will save you).
  5. Big Decay (Higgs field instability/metastability: no force will save you).

See you in other blog post soon!

LOG#216. Asian length units: the list.

Asian units for length: a non-exhaustive list.

1 Chi (China)=\dfrac{1}{3} m=33\dfrac{1}{3} cm

1 Chi (Hong-Kong)=14\dfrac{5}{8} inches=0.371475 m

1 Chi (Taiwan)=1 shaku (Japan)=\dfrac{10}{33} m\approx 0.3030 m

1 chek =0.371475 m

1 tsun =0.1 chek (Hong-Kong)

1 tsun =3\dfrac{1}{3} cm (Taiwan, China)

1 fan= 0.1 tsun

1 shaku (Japan, korean “ja”)=\dfrac{10}{33} m

1 ken (Japan)=1 hiro=6 shaku=\dfrac{60}{33} m=1\dfrac{9}{11} m\approx 1.818 meters

1 jo (Japan)=10 shaku=\dfrac{100}{33} meters

1 cho (Japan)=360 shaku=\dfrac{3600}{33}\approx 109.1 meters

1 ri (Japan)=12960 shaku=\dfrac{129600}{33} m\approx 3927 meters

1 sun (Japan)=10^{-1}shaku=\dfrac{1000}{33} mm

1 bu (Japan)=10^{-2}shaku=\dfrac{1}{330} m\approx 3.030 mm

1 rin (Japan)=10^{-3}shaku=\dfrac{1}{3300}m\approx 0.3030 mm

1 mo (Japan)=10^{-4}shaku=\dfrac{1}{33000}m\approx 0.03030 mm

10 chi = 17 hang
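
If you want to play with these conversions, here is a toy Python converter for the list above (the dictionary keys are hypothetical labels of mine, not standard identifiers):

    # Toy converter for the length units listed above (values in metres).
    UNITS_IN_METRES = {
        "chi_china": 1/3, "chi_hk": 0.371475, "chi_taiwan": 10/33,
        "chek": 0.371475, "shaku": 10/33, "ken": 60/33, "jo": 100/33,
        "cho": 3600/33, "ri": 129600/33, "sun": 1/33, "bu": 1/330,
        "rin": 1/3300, "mo": 1/33000,
    }

    def to_metres(value: float, unit: str) -> float:
        return value * UNITS_IN_METRES[unit]

    print(to_metres(6, "shaku"))   # 1 ken = 6 shaku ~ 1.818 m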

Enjoy!

LOG#215. Entanglement is the key?

Hi everyone!

Is entanglement the key? A tribute to Ant-Man and Hawking today as the preamble, Quantum Chess playing for heroes like you:

Entanglement is the subject we have today. Entanglement is that spooky, weird feature of the quantum realm that stunned Einstein and the realist scientists who believed that reality is a preexistent “thing/stuff/entity”. More precisely, entangled states arise in composite quantum systems: systems made of parts, whose quantum states are composed from the states of the single subsystems. Let me introduce a little bit of terminology:

  • Pure states.
  • Mixed states.
  • Separable states.
  • Entangled (non-separable!) states.

Entanglement is related to the above 4 types of states. Quantum Mechanics as we know it today is based on some basic axioms:

  1. Superposition (linearity) of quantum states.
  2. Heisenberg uncertainty principle (HUP).
  3. Unitarity.
  4. Projection postulate.
  5. Quantum composite systems or states can be made up from tensor products of single systems. What is a tensor product? It is a way to create composite matrix states from two or more single matrices. It is not the only way, but it is the one that works.

Take for instance N=2 (two parties, two subsystems forming one big system). The Hilbert space of the composite two-party quantum system is the tensor product H=H_A\otimes H_B, i.e., the Hilbert space is the tensor product of the two subsystem Hilbert spaces. Then, the quantum states of the composite system are given by:

    \[\vert\Psi>_{AB}=\vert AB>=\sum c_{ij}\vert i>_A\vert j>_B\]

Then, a state is separable IFF you can find vectors \vert c_i>_A and \vert c_j>_B such that c_{ij}=c_i(A)c_j(B) in the previous expansion. That is, if you can factorize the state as a single product of the two single-system quantum states, the state is separable; otherwise the state is ENTANGLED. You can generalize the above definition to any number of systems (parties!). The general n-party quantum state is defined on a tensor product of the subsystem Hilbert spaces as follows:

    \[H=\bigotimes_{i=1}^n H_i\]

(1)   \begin{equation*} \vert A_1\;A_2\;\cdots A_n>=\sum c_{i_1i_2\cdots i_n}\vert i_1>_{A_1}\vert i_2>_{A_2}\cdots \vert i_n>_{A_n}\end{equation*}

That's entanglement!!!! You would say, then, why is it “hard”? Well, there are several reasons why entanglement is hard and why entanglement matters A LOT in QM affairs. Let me start with the first item. Why is entanglement hard? A list:

  • Entanglement is a subtle non-separability, meaning a certain non-locality compatible with special relativity. Yes! It is true. Entangled states have certain abilities that allow you to do magic at very large distances, but causality and the finite propagation of signals are not violated.
  • Bell's theorem (more on this later). Bell found that the existence of entangled states in QM allows you to test the existence of hidden variable theories. It turns out that QM holds superbly. Unchallenged. Bell-type experiments kill any hope for local realist theories. You need a very special type of theory to mimic the QM results of Bell-type experiments: they need to be contextual. Reality is not independent of the way we measure it, and indeed, there are systems which act as if they were not independent of their parts even when separated by kilometres of distance. Chinese scientists have indeed built a satellite using entanglement to secure communication.
  • Currently, the EPR (Einstein-Podolsky-Rosen) experiment, the type of experiment Bell indeed realized, has come into the focus of quantum gravity theorists due to the black hole information problem and the nature of gravity. Van Raamsdonk proposes that gravity “is” entanglement, and Susskind and collaborators are developing an idea summarized in the formal equation ER=EPR. ER is the Einstein-Rosen bridge in General Relativity. ER=EPR states that quantum entanglement is caused by two (or more) quantum particles being connected by (micro)wormholes (Einstein-Rosen bridges!). That quantum entanglement could be caused by non-simply-connected quantum microwormholes is quite a statement. Hard to test experimentally. Van Raamsdonk indeed suggests that gravity itself is caused by entanglement.

The relationship between gravity (“classicality”) and entanglement is an old friend. In fact, there is another point where this idea arises, but I am not sure my readers will know it. Some time ago, Rigolin proved that a high number of entangled particles can beat the Heisenberg Uncertainty Principle bound. Even more, he conjectured that in the limit of an infinite number of entangled particles, you get “classical” zero dispersion. That is, with an infinite number of entangled particles, you could in principle evade the uncertainty relationship. From this viewpoint, (the amount of) entanglement REDUCES uncertainty. Reciprocally, separability enlarges uncertainty. You can read Rigolin's original work here http://cds.cern.ch/record/499980/files/0105057.pdf. Wait, what if you modify the HUP into some generalized form of it, like the EHUP, GUP or EGUP? Logic imposes itself here: the EHUP and EGUP or GUP make the system more quantum and less classical, enhancing the bounds and reacting against the reduction of uncertainty from a very large number of entangled particles. The GUP, EHUP and EGUP have the opposite effect to entanglement and make entangled states more uncertain. See about this here https://arxiv.org/pdf/1706.10013.pdf

You can also read that noncommutativeness (as a bonus) makes entanglement and nonclassicality more evident in the paper: https://arxiv.org/pdf/1506.08901.pdf 

And now? We return to some vocabulary! N-level pure states are defined formally as quantum states

    \[\vert \Psi>=\sum_{i=0}^{N-1}c_i\vert i>\]

Thus, pure states are simply linear superpositions of quantum states! You can get qubits with N=2, qutrits with N=3, and qu\inftyits with N=\infty (quantum fields!). Even more, you could add a continuous term as well and spoil the finite sum. Of course, entanglement of infinite dimensional systems is not usual in standard discussions of quantum computing, but it can be added without loss of generality. What about mixed states? Well, we need a new gadget to describe mixed states. This new device is the density matrix. A mixed state describes a statistical ensemble of copies of the N-level system. For mixed states the density matrix reads

    \[\rho=\sum_i w_i\vert i><i\vert\]

where \sum w_i=1 by probability conservation (a pure state is the special case where a single w_i equals 1). Now, take an N=2 party system. If separable, then you can write, by definition, the density matrix as the following sum of tensor products:

    \[\rho=\sum_i w_i\,\rho_i(A)\otimes\rho_i(B)\]

and where \sum_j\vert c_{ij}\vert^2=1 and we can generalize this to N-party systems as

(2)   \begin{equation*}\rho=\sum_i\omega_i\,\rho_{i}^{A_1}\otimes\cdots\otimes\rho_{i}^{A_n}\end{equation*}

for separable states with

    \[\sum_j\vert c_{ij}\vert^2=\sum_i\omega_i=1\]

by probability conservation once again.

The next step is to define the so-called reduced density matrix. It is a density matrix created from the big one by tracing over one or more subsystems. Start from the total density matrix:

    \[\rho_T=\vert\Psi><\Psi\vert\]

and for the reduced density matrix, tracing over the B subsystem (N=2 party case), you get

    \[\rho_A=\sum_j <j\vert_B\left(\vert\Psi><\Psi\vert\right)\vert j>_B=\mbox{Tr}_B\rho_T\]

and similarly you could get the reduced density matrix \rho_B by tracing over the A states.

Entanglement example 1. Bell states.

Take N=2, a two-level system. H_A=\left[\vert 0>_A,\vert 1>_A\right] is the basis for the A system and H_B=\left[\vert 0>_B,\vert 1>_B\right] the basis for quantum states of the B system. For the composite (tensor product) system, you can find 4 interesting Bell states that are entangled and can not be decomposed into single products of basis states. They are:

(3)   \begin{equation*}\vert BELL>_1=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 0>_B+\vert 1>_A\vert 1>_B\right]\end{equation*}

(4)   \begin{equation*}\vert BELL>_2=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 0>_B-\vert 1>_A\vert 1>_B\right]\end{equation*}

(5)   \begin{equation*}\vert BELL>_3=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B+\vert 1>_A\vert 0>_B\right]\end{equation*}

(6)   \begin{equation*}\vert BELL>_4=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B-\vert 1>_A\vert 0>_B\right]\end{equation*}

They are indeed special in a sense. They are maximally entangled states, i.e., they are the states with the greatest degree of entanglement possible within the composite system.

Entanglement example 2. Bell 4 reduced density matrix.

Take the 4th Bell state:

(7)   \begin{equation*}\vert BELL>_4=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B-\vert 1>_A\vert 0>_B\right]\end{equation*}

Trace over the B subsystem:

    \[\rho_A=\mbox{Tr}_B\rho_T(\Psi)=\dfrac{1}{2}\left(\vert 0>_A<0\vert_A+\vert 1>_A<1\vert_A\right)\]

Then, you see that the reduced density matrix of an entangled pure ensemble IS a mixed ensemble or state. This result is general: in bipartite systems, \rho is entangled iff the reduced states are mixed rather than pure! A numerical sketch follows.
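
Here is the promised numerical sketch (plain numpy assumed; the reshape/trace trick implements the partial trace over B):

    # Minimal sketch: reduced density matrix of the 4th Bell state is maximally mixed.
    import numpy as np

    ket0 = np.array([1.0, 0.0])
    ket1 = np.array([0.0, 1.0])

    # |Bell_4> = (|0>_A|1>_B - |1>_A|0>_B)/sqrt(2)
    psi = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())                  # total density matrix

    # Partial trace over B: view rho with indices (A, B, A', B') and trace out B
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(rho_A)   # [[0.5, 0], [0, 0.5]]: a mixed state, so |Bell_4> is entangled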

Entanglement example 3. Other entangled states.

For M>2 parties, with two levels, there is a very interesting generalization of Bell states. It is called the GHZ state:

(8)   \begin{equation*}\vert GHZ>=\dfrac{1}{\sqrt{2}}\left(\vert 0>^{\otimes M}+\vert 1>^{\otimes M}\right)\end{equation*}

There are also the so called spin squeezed states, a special set or type of squeezed coherent states. They are important in optics. For 2 bosonic modes, there is the NOON state:

(9)   \begin{equation*}\vert NOON>=\dfrac{\vert N>_A\vert 0>_B+\vert 0>_A\vert N>_B}{\sqrt{2}}\end{equation*}

This is similar to the Bell states, except that instead of the 0,1 kets you have N,0 kets. That is, you have N excitations (N photons) in one mode and 0 photons in the other mode. It turns out that Bell states, GHZ states and NOON states are maximally entangled. However, there are other, non-maximally entangled states: for instance, the previously mentioned spin squeezed states or the twin Fock states. NOON states can also be “phased”, such that you build up a modulated NOON as

    \[\vert NOON>=\dfrac{\vert N>_A\vert 0>_B+e^{iN\theta}\vert 0>_A\vert N>_B}{\sqrt{2}}\]

This state represents the superposition of N particles in mode A with 0 particles in mode B, and vice versa, shifted by a phase factor. NOON states are useful objects in quantum metrology since they are capable of making precision phase measurements in optical interferometers. Build up the observable A for NOON states as follows:

    \[A=\vert N,0><0,N\vert+\vert 0,N><N,0\vert\]

Then, you can easily prove that the expectation value of A in a NOON state switches between +1 and -1 as the phase changes from 0 to \pi/N (a numerical check follows below). Moreover, the error in the phase measurement is indeed

    \[\delta \theta=\dfrac{\delta A}{\vert \dfrac{d<A>}{d\theta}\vert}=\dfrac{1}{N}\]

This is the so-called Heisenberg limit, in fact an improvement over the standard quantum limit (SQL) given by

    \[\delta\theta_{SQL}=\dfrac{1}{\sqrt{N}}\]
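
The promised check: working in the two-dimensional subspace spanned by \vert N,0> and \vert 0,N>, a short sketch shows that \langle A\rangle=\cos(N\theta), so the interference fringes oscillate N times faster than for a single photon:

    # Minimal sketch: <A> for a phased NOON state in the {|N,0>, |0,N>} subspace.
    import numpy as np

    N = 5                                    # photon number (assumed for illustration)
    N0 = np.array([1.0, 0.0])                # stands for |N,0>
    ZN = np.array([0.0, 1.0])                # stands for |0,N>
    A = np.outer(N0, ZN) + np.outer(ZN, N0)  # A = |N,0><0,N| + |0,N><N,0|

    for theta in (0.0, np.pi / (2 * N), np.pi / N):
        psi = (N0 + np.exp(1j * N * theta) * ZN) / np.sqrt(2)
        exp_A = np.vdot(psi, A @ psi).real   # <psi|A|psi> = cos(N*theta)
        print(f"theta={theta:.3f}  <A>={exp_A:+.2f}")   # goes +1 -> 0 -> -1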

The simplest non-Bell GHZ state is made with M=3 parties. GHZ states are used in very important applications:

  • Quantum communication protocols.
  • Quantum cryptography protocols.
  • Secret key sharing.

There is no standard, unique measure of multipartite entanglement because, as we saw, there are different types of multipartite entanglement. Indeed, the different types of entanglement are not generally mutually convertible. The GHZ state is maximally entangled. For M=3, take

    \[\vert GHZ^3>=\dfrac{\vert 000>+\vert 111>}{\sqrt{2}}\]

    \[\rho_3=\mbox{Tr}_3\left(\dfrac{\vert 000>+\vert 111>}{\sqrt{2}}\right)\left(\dfrac{< 000\vert+< 111\vert}{\sqrt{2}}\right)\]

so you get an unentangled mixed state:

    \[\rho_3=\left(\dfrac{\vert 00><00\vert+\vert 11><11\vert}{2}\right)\]

Thus, this GHZ state has certain 2-particle quantum correlations, but they are of “a classical nature” somehow. GHZ states lead to striking non-classical correlations too. They allow you to test the internal inconsistencies of the EPR elements of reality. The generalized GHZ state for d levels is given by the state

(10)   \begin{equation*}\boxed{\vert GHZ^d>=\dfrac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\vert j>^{\otimes q}}\end{equation*}

Maybe you want to experiment with quantum states and quantum entanglement. There is a MATLAB toolbox for exploring quantum entanglement theory. It is called QETLAB. I have not checked it, and I am sure there are other similar toys and apps out there. Let me know, anyway!

Entanglement example 4. W-states.

There is a 3 qubit interesting entangled quantum state called W-state. It is interesting for storage of quantum memories. It reads:

    \[\vert W>=\dfrac{1}{\sqrt{3}}\left(\vert 001>+\vert 010>+\vert 100>\right)\]

For N-qubits the W-state is

    \[\vert W>_N=\dfrac{1}{\sqrt{N}}\left(\vert 0\cdots 1>+\cdots+\vert 1\cdots 0>\right)\]

The W-state is just a linear quantum superposition of all possible pure states with exactly one qubit excited and the others in the ground state, all weighted with the same probability. A sketch of its construction follows.
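
The promised sketch of the W-state construction as a 2^N-dimensional vector (plain numpy assumed):

    # Minimal sketch: the N-qubit W-state as a one-hot superposition.
    import numpy as np

    def w_state(n: int) -> np.ndarray:
        psi = np.zeros(2**n)
        for k in range(n):
            psi[1 << k] = 1.0    # basis index with a single '1' at qubit k
        return psi / np.sqrt(n)

    print(w_state(3))   # amplitude 1/sqrt(3) on |001>, |010> and |100>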

Multipartite entanglement is much more complicated. M>2 entanglement is richer in possibilities than M=2 entanglement. With M=2 there are fully entangled (maximally entangled) and fully separable states. However, things go wild with M>2 parties. You can also have partially separable or partially entangled states. A fully M-partite separable state

    \[\rho_{A_1\cdots A_M}=\sum_i P_i\rho^i_{A_1}\otimes \cdots \otimes \rho^i_{A_M}\]

is fully separable when written in this way. There are also fully separable pure (product) states

    \[\vert A_1\cdots A_M>=\vert A_1>\otimes\cdots\vert A_M>\]

and, beyond these, partially separable or partially entangled states that sit between the fully separable and the fully (maximally) entangled ones.

There are other cool measures of entanglement related to states. They have weird names like tangles or hyperdeterminants! However, in the end, all are expressed in terms of pure or mixed states, with a certain amount of partial or maximal (or null) entanglement!

What else? Beyond Bell, GHZ, W, NOON and similar states, there are many interesting topics related to entanglement these days. These subjects include:

  • Going from multipartite to bipartite entanglement.
  • Entropy bounds related to entanglement and entanglement entropy.
  • Quantum channels and quantum channel capacities.
  • LOCC=Local Operations and Classical Communication observables.
  • Entanglement distillation (yes, you can distill entanglement; we have seen an example before!).
  • Quantum teleportation (it is not just like beam me up, Scotty, but it rocks).
  • Quantum cryptography and quantum communication, quantum key sharing.
  • Quantum game theory.
  • Black hole information paradoxes.
  • The EPR=ER and Gravity=Entanglement ideas.
  • Hyperentanglement, i.e., the simultaneous entanglement between multiple degrees of freedom of 2 or more entangled systems.

In summary, quantum entanglement is a fascinating topic. I am sure many of you knew many of these things before. Perhaps Rigolin's works, and the research on how to spoil or enhance uncertainty from entanglement and/or noncommutativity (GUP, EHUP, EGUP), are the strangest topics I discussed here today. Did you enjoy it? I hope so!

Challenge question: could entanglement affect gravity, and then time/space measurements? How could you know if your time or space is entangled with mine, or with the local time/space measurements in other parts of the observable Universe?

See you in another blog post!!!!!

LOG#214. Supertranslations.


Surprise! Twice in a day!

Yes, I have much to tell yet, alive… Beyond the holographic principle, and things like gravity being Yang-Mills squared, or closed strings being open strings squared, we have had other important developments in the theory of black holes and theoretical physics these years, even as Hawking passed away… Let me say I am not going to surprise you too much, since I am sure many readers will know what I am going to say after reading the title of this entry.

    \[Gravity=YM^2\]

    \[Closed=Open^2\]

Firstly, let me write a short chronological note:

1965. Weinberg writes a paper on infrared (IR) soft photons (and gravitons). Soft particles are zero energy particles.

1985. Braginsky and Thorne write a paper on gravitational wave bursts with memory. They conjectured it could arise from some collisions of stars and black holes. 1965=1985. What the hell is going on here?

1985. How gravity and other forces act on large scales is temporarily abandoned as a topic. Lack of experimental tests. You know.

2014. The resemblance between the 1965 and 1985 works is highlighted by Strominger et al. The hot point is how gravity and other long-range forces out there act on large scales and affect the black hole information paradox! What the hell? Almost 50 years again, reboot! 1965 used “p”, 1985 used “P”. Strominger et al. used 1411.5745 stuff (arXived!).

History track details: in 1962, Bondi, van der Burg, Metzner and Sachs (the latter independently) realized that DIFFERENT observers at constant speed would disagree on the limit GR\rightarrow SR. Wait, wait… How? Yes, you are reading it well. Different observers can observe different SR limits at constant speed when you go far away from the source. Therefore, the limit of flat spacetime from curved spacetime is not straightforward at all! The usual mantra from textbooks, that you get special relativity locally from general relativity, is subtle. GR textbooks say that GR reduces to SR in the weak field approximation, when you go far away from the source and locally your spacetime is “flat”. However, the floppy spacetime continuum makes this statement vague. Indeed, it is not accurate enough. The spacetime continuum is indeed a fluid-like or crystal-like, rigid thing. We would naively expect that what happens in our solar system is independent of the rest of the galaxy or the local group! Gravity diminishes at large distances, and planets or stars, therefore, are independent of one another and of distant enough objects! Well, it seems this is an oversimplification. Compère and collaborators, focusing on the crystal analogy, effectively “prove” that spacetime itself acts and looks like the same thing when shifted from one position to another, but… Studying Bondi et al.'s previous works, it turns out that the limit “far far away”, r\rightarrow\infty, does NOT spoil gravity and the gravitational force. Neither do you get SR: an infinite number of symmetries (an infinite-dimensional symmetry group) and degrees of freedom appear! Spacetime remains floppy even at very large distances! It seems you can not spoil gravity completely. Now, one could protest… What happens to the equivalence principle here? That is quite a question. Since, even when there is “no gravity”, there is “gravity” left behind: a residual gravitational structure is out there even when you can not see it or feel it. This residue remains and it is spooky. It has terrible consequences (please, don't tell astrologists or futurologists about this, …PLEEEEASEE). Distant planets and stars, distant galaxies, are not completely independent of one another after all. The Force is there yet. Furthermore, GR is NOT the same as SR even at long distances from the source (the difference would go unnoticed by graduate students, I presume, unless they read about it, or they were informed on this point). What is this influence? What is left unspoiled when far enough from the source? The magic word that gives name to this blog entry: supertranslations (please, don't confuse them with the supertranslations from supergroups and supersymmetry).

What are supertranslations? Well, something easy to say, not easy to live with without care. Supertranslations are certain coordinate transformations given by ANGLE-dependent translations relating points at “infinity”, far from a gravitating body. In other words, they are infinite-dimensional symmetry transformations relating points at different angles very far away from sources. They form a group called the BMS group. The mathematical structure of the BMS group implies that empty spacetime has a large complexity. From this viewpoint, there are infinitely many ways for spacetime to be empty. Of course, it seems nonsense, but it has surprising consequences. That vacuum and vacuum spacetime have such vast complexity… Well, it is amazing. Vacuum is a complex thing after all. Every big question of 21st century theoretical physics is related to vacuum! The BMS group has another surprise: beyond supertranslations, you also have superrotations. Superrotations are generalized rotations (even rotational boosts get generalized) that take the form of conformal complex transformations; they are related to the conformal rescaling of the metric, and it turns out (and I am sure this already turns stringers horny) that the superrotation charges form a Virasoro algebra or group. Supertranslation charges are supermomenta, and these are conserved due to the Noether theorem. Superrotation charges are much more subtle; they are superangular momenta. The BMS groups in 3 dimensions or higher have supertranslations and superrotations, but superrotations are bad beasts. It turns out they could allow you to create cosmic strings at the far extreme points of the Universe.

Time machine again. It is the 1930s of the 20th century. Felix Bloch and Nordsieck calculated that when charged particles collide, the probability of a given outcome is independent of the number of zero-energy (soft) photons you produce and of extra details. The same situation can be shown to happen with soft gravitons. In general, soft photons or gravitons are added due to supertranslation asymptotic symmetries! Apparently empty spacetimes should gravitate as a consequence of these residual asymptotic symmetries of gravitational forces. Adding soft particles to the vacuum does not change the physics, but it does contribute to the global angular momentum and momentum. This vacuum degeneracy is striking. But it is not uncommon. Asymptotic symmetries have been discovered also in gauge theories. There, they have a grander name: they are called “large gauge transformations”. In other words, large gauge transformations are asymptotic symmetries in gauge theories that don't allow you to spoil the field completely. Vacuum is not “unique”; it is defined modulo asymptotic symmetries/large gauge transformations. Moreover, you can even get asymptotic symmetries from other particles too, not only photons or gravitons. Strominger, in fact, has provided a new interpretation of supertranslations: supertranslations ADD (soft) particles to (vacuum) spacetime. Are BMS charges real or a mathematical artifact? What is the effect of the BMS charges on the equivalence principle? Well, these questions are hard to answer, but there is another part of the whole story coming.

Circa 1970s. Zel'dovich and Polnarev discover the memory effect. Gravitational waves cause oscillations of masses and particles, BUT they also cause another stunning effect. Gravitational waves produce a permanent shift in the position, or displacement, of any object they cross! They also cause a permanent time shift. That means, for instance, that the mirrors of LIGO and other gravitational wave experiments do NOT return to their original positions. This is a very tiny effect though. This effect is very similar to the dislocation effect on a crystal after a phonon disturbs the atoms in the lattice. The gravitational wave here acts as the dislocation. Compère has remarked on this effect. The passage of gravitational waves is like a dislocation in crystals, and it produces a permanent displacement we could measure in the future with gravitational astronomy. If you want to see how memory effects arise in other forces, see e.g. http://arxiv.org/abs/1805.12224 and references therein. Strominger, indeed, has envisioned a possible solution to the BHIP (black hole information paradox) using the asymptotic symmetry group. He was joined by Hawking months before his death. However, there are unsolved puzzles with respect to this solution, and their proposal is not yet a convincing one. The good thing is that we are approaching a new symmetry principle, beyond the diffeomorphism or Poincaré symmetry group. A new symmetry and relativity principle is necessary in order to achieve a true unification of gravity and quantum mechanics. The dislocation effect of gravitational or gauge theory memories is that two observers shift a constant distance after the gravitational (or gauge interaction) wave passes. Memory effects for electromagnetic and gauge forces are universal. Perry, Hawking and Strominger thought up a solution of the BHIP with these symmetries. Yet, how spacetime emerges from more fundamental symmetries like these BMS symmetries is not an easy question. Even worse, the hair or extra charges introduced by supertranslations and superrotations could be not enough for saving unitarity!

In summary:

  • Asymptotic symmetries or large gauge symmetries are symmetry transformations that survive at far infinity from the source. Do you remember that High School boundary condition telling you that the potential at infinity is ZERO? Well, it is not just “zero”. Something lives there at infinity… Something is left. You can add any number of zero-energy soft particles far away. But, wait, what is a zero-energy particle?
  • The Noether theorem applied to supertranslations and superrotations gives you conserved hair or charges (an infinite number of them indeed) called supermomenta and superangular momenta. Supermomenta can be thought of as angle-dependent translations at infinity. Superangular momenta are more abstract; they are just “conformal” transformations associated with Virasoro algebras. Superangular momentum charges can be imagined as the charges leaving a circle diffeomorphically invariant. That is, superrotations are just reparametrizations of the circle in abstract complex spaces.
  • Asymptotic symmetries are a step toward the new symmetry principle behind quantum gravity and the TOE. They are not yet powerful enough to solve the BHIP, but they hint towards a new symmetry as well.
  • It is not true that by taking the limit of infinitely far distance you spoil gravity or any other gauge force and get mere special relativity, or even mere Galilean relativity. Asymptotic symmetry changes this naive assumption. What is the destiny of the equivalence principle in this setting? Well, perhaps the vacuum itself can not be defined except MODULO large gauge transformations/generalized asymptotic symmetries. The quantum rest state of zero force/zero jerk/zero pop/zero crackle/zero absement is meaningless/pointless/futile. You can always add zero-energy particles to your vacuum. Recall the Unruh effect here. Is asymptotic symmetry telling us that different accelerated observers could agree on their vacuum modulo soft particles?
  • Memory effects. Gravitational waves permanently disturb any object they pass through. This constant shift is called gravitational wave memory. It could be measured in the future. Similar memory effects exist in other gauge theories.
  • Interplay of certain Holy Trinity:

    \[\boxed{\mbox{Symmetry}\leftrightarrow \mbox{Soft particles}\leftrightarrow\mbox{Memory effects}}\]

Have you been supertranslated and superrotated? Probably yes: any gravitational wave crossing through your body permanently shifts your atoms and particles! Fortunately, it is a tiny effect. Hopefully, we will measure it with future gravitational wave telescopes! Disturbingly, any gauge force does the same, and it can not be spoiled away modulo soft particles at your position and time. Even worse, OR NOT, any far away source in the Universe is linked to you via supertranslations and superrotations. Don't tell any pseudoscientist about it. Instead of blowing up their minds, they will try to publish and make money with that information. Mmm… Should I make money with supertranslations and superrotations? Just joking, see you in another quantum blog post!!!!!!

P.S.: The motto is that we were lied to! The potential at infinity does NOT spoil the force (potential) to zero. Zero-point measurements are likely impossible in quantum fluctuating theories, since there is no such thing as quantum rest. The quantum aether is “dead” as a static thing. Is the cosmological constant a residual effect of the quantum unified field theory?

 

LOG#213. No boundary.

Hello, again!

Yes, three consecutive days posting. A change! I missed blogging these months of nightmares. I hope these posts end your wait. The wait is over. And I can not wait either. Tempus fugit.

Today, we will be discussing Quantum Cosmology: the no boundary proposal and related ideas. Let me add first that Quantum Cosmology is a fuzzy subject. There is no consistent theory of Quantum Cosmology yet, beyond what we accept from the standard cosmological model (LCDM), which is accepted as the macroscopic view of the Universe in concordance with all the data. OK, those who follow the latest news know there is a “crisis” due to some values of the Hubble parameter. 75 km/s/Mpc or 67? This is not a real crisis. Not a big one at least. I remember reading as a child the controversy about H_0 being 50 or 100, even 200. That is not the point, I believe. I think everyone in the field knows that H_0 will converge in the end. The real point is to adjust the cosmological distance ladder to the CMB observations. Period. Of course, we will find (or not) new physics along the path, but the times when we doubted how old the Universe is seem to be almost over. OK, perhaps if a revolution arises… But revolutions are not periodic; revolutions take time to arise.

Well, to the subject! We already know the Universe (or Multiverse, but this is speculative) had to be hotter in the past. That is the Big Bang. Going even further, to the singularity… The Universe arose from some quantum fluctuations we do not understand (due to quantum gravity, whatever it is), and it tunneled from “nothing” into “everything”. This tunnel effect could even be a mere bounce from another preexistent Universe, but we can not probe that epoch with electromagnetism. At about 380000 years after the “singularity phase”, the Universe created atoms and became transparent to photons, when the first hydrogen, helium or even lithium atoms were formed. That is the origin of the Cosmic Microwave Background (CMB). Before that, we don't have enough information to tell… But we do know the Universe passed through the electroweak phase transition and the QCD phase transition creating protons and neutrons, and we know there is likely a cosmological neutrino background (CNB) emitted in the first seconds of the Universe, at a temperature of about 2 K (1.945 K) or less, depending on the degrees of freedom available at neutrino decoupling. Even before, to solve why the Universe is flat (please, don't tell flatlanders about this), the Universe had to suffer an inflationary phase (faster than the speed of light!), likely nucleating baby Universes like ours and creating a Multiverse. This view is not completely accepted, but it is true. It could turn out that vacuum bubbles nucleated at the inflationary point, creating not only a vast number of (currently unobservable) island Universes, but also primordial gravitational waves and/or primordial black holes we could detect in the next decades with better and better gravitational telescopes.

The idea of tunneling Universes and bubble Universes, from inflation, had one of its master minds in Alexander Vilenkin. Furthermore, the idea of treating Cosmology from the quantum viewpoint was pioneered by James Hartle and Stephen W. Hawking, to whom I would like to dedicate these lines and this post as well. The Multiverse has infinite dangers. The Ancient One knew that when talking to Stephen Strange. She was right. The Multiverse also contains many inconsistencies. And the young field of Quantum Cosmology tries to solve them, but we are still far from understanding the whole Universe with quantum theory, because we lack an essential part: the degrees of freedom. A misnomer for observables. The no boundary proposal, first introduced by Hartle and Hawking, joined the Bubble Universe picture from inflation long ago, but they are not completely consistent. Indeed, they seem to contradict each other. In a paper entitled “No smooth beginning for spacetime”, it was shown:

  • The Universe, if it emerged smoothly from the void/nothingness, would be wild and fluctuating. This is in apparent contradiction with current observations.
  • The no boundary proposal, the idea that the Universe itself has no boundary in spacetime at the beginning, does NOT imply a large Universe like the one we live in. Instead, a tiny curved Universe that would collapse almost instantaneously would be produced. That is not our case, since we are already here, alive. Yet.
  • We need something else. An extra idea or framework, another picture to understand the very, very early Universe OR, otherwise, we must rethink the most elementary models of quantum gravity at hand at this time.

What do we know? Let me begin with some general key facts.

GENERAL relativity. The known agenda. The known knowns of GR.

  • Gravity=Geometry of curved spacetime.
  • Mass-energy curves spacetime. Spacetime tells mass-energy how to move.
  • Free masses move on “straight” paths in curved spacetime. Geodesic is the cool name for a “straight” path in curved spacetime.
  • General relativity cannot hold in very dense, extreme situations, concentrated at the smallest scales, without conflicting with our microscopic quantum description of the Universe. Therefore, we need a UV completion of gravity. The holy grail of physics: Quantum Gravity. QG would apply to black holes and to the origin and extreme ends/edges of the Universe.
  • Dark matter. We need extra missing mass-energy to explain how galaxies move. It turns out this extra matter is about 1/4 of the total mass-energy budget, or about 80% of the matter content of the Universe. It can NOT be made of Standard Model particles.
  • Dark energy. It is now the main component of the energy budget of the Universe, about 70%. It is accelerating (positively) the expansion of the Universe at the largest scales. It seems to be related to the vacuum energy density of space. Dark energy is another name for a rebooted cosmological constant. It was introduced by Einstein himself and then deprecated. It seems…he was right. Dark energy exists. It blows your mind trying to understand why dark energy has the value it has today. The best theories of Quantum Field Theory give such a bad prediction for it that you wonder whether you are really understanding anything…

There are many other “issues” with GR and the LCDM view. But these would be some of the big “known knowns” of GR. What about the Holy Grail? Well,…Not much, but:

Quantum Gravity Agenda. The known knowns we are hoping to get from it.

  1. Quantum gravity=Quantum theory of gravity=Microscopic theory of gravity.
  2. Quantum gravity means a quantum theory of space-time geometry. That is, a quantum corpuscular view of the spacetime continuum.
  3. Quantum states for gravity are unknown today. The only theories giving hints of what these states are: superstring theory/M-theory, loop quantum gravity (LQG) and minor approaches (I am on this third road) rethinking what quantum relativity or spacetime is. We lack a more precise counting or identification of the microscopic degrees of freedom space-time is made of. The secret of what makes up space-time itself would likely grant a Nobel Prize if uncovered. What are the origins of black hole entropy? Remember, atomic physics arose when trying to explain solids, liquids and gases in the 19th century. We are now in a similar position with space-time.

Quantum Mechanics agenda/The QFT keyfacts.

  • Main objects in QM and QFT are wavefunctions or quantum fields.
  • QM/QFT dynamics depends on quantum states/quantum operators. Quantum states satisfy certain wave-equations.
  • Quantum states/wavefunctions are NOT mechanical waves. Indeed, much to the chagrin of determinism fans, quantum wave functions are probability waves. Stunned? Wave function values are just complex numbers. They describe oscillations of fields. QM/QFT gives you a rule to calculate probabilities via the Born rule P=\vert\Psi\vert^2. Amplitudes are complex, but observables are real probabilities.
  • Duality. Any quantum field has wave properties and particle properties. They are complementary aspects of the same object in different regimes: \lambda=h/p. Duality is consistent with special relativity (SR).
  • Heisenberg uncertainty principle (HUP). You cannot observe certain observables without altering others simultaneously. Observations are context dependent.

    \[\Delta A\,\Delta B\geq\dfrac{1}{2}\vert\langle\left[A,B\right]\rangle\vert\]

  • States evolve unitarily, following Schrödinger-like equations i\hbar\,\partial_t\Psi=H\Psi (with H\Psi=E\Psi for stationary states). A minimal numerical sketch of unitary evolution and the Born rule follows this list.
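A minimal numerical sketch of these last two bullets, assuming a toy two-level Hamiltonian of my own choosing (nothing here is specific to cosmology):

```python
# Toy sketch: a two-level system evolving unitarily, with probabilities
# extracted via the Born rule P = |amplitude|^2. The Hamiltonian is an
# arbitrary illustrative choice, not any physically meaningful model.
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                   # natural units
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                   # simple "flip" coupling

psi0 = np.array([1.0, 0.0], dtype=complex)   # start in the first basis state

for t in [0.0, 0.5, 1.0, 2.0]:
    U = expm(-1j * H * t / hbar)             # unitary evolution operator
    psi = U @ psi0
    probs = np.abs(psi) ** 2                 # Born rule
    print(f"t={t:4.1f}  P={probs}  sum={probs.sum():.6f}")   # sum stays 1
```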

The QG agenda: two main paths.

  1. Loop quantum gravity (LQG). You can canonically and non-perturbatively quantize gravity via connection variables that take the form of loops, and include Hilbert spaces in this approach. It can calculate the black hole entropy in certain examples with the aid of spin networks. Area and volume are quantized. Singularities are likely to be replaced by quantum bounces. Issues: the introduction of matter.
  2. (Super)String theory/M-theory. Quantum particles are really vibrations of supertiny fundamental strings (or p-branes). Every fundamental force is derived from string excitations. You require extra dimensions to do it consistently. Dual formulations and holography are also compatible. Issues: they predict a vast number of vacua or possible Universes. No one knows how to select our vacuum/Universe from every possible configuration. This has produced the Landscape/Swampland nightmare. In other words, nobody knows how to select the field values that tune the possible configurations to our current Universe. Bad news? Maybe not. Maybe yes. Divergent views are available at this moment in time.

Quantum gravity needs and requires experiments, tests. We are now at the dawn of neutrino and gravitational wave astronomy. Maybe we will have dark matter and dark energy astronomy as well! However, in the past, some proposals were made to try to make tests of quantum gravity accessible. There is a problem. QG is, at least from naive expectations, hidden at energy scales of about 10^{16} TeV (the Planck energy). No forthcoming collider is going to reach these energies. The LHC works at about 10 TeV. Future colliders will likely work at 100 or even 1000 TeV. We are thirteen to fifteen orders of magnitude away in energy, depending on the machine! Solutions? Well, maybe the quantum gravity energy scale is really much lower than that. That is the idea of extradimensional gravitational/gauge models. The fundamental true gravity scale is high, but it gets diluted into the extra dimensions, and that is why gravity is weak. The LHC has limited these possibilities. Option two: try to get the highest energy particles/interactions of the Universe, via gamma rays, neutrinos or…gravitational waves! The true power of gravitational waves is that they can in principle test everything. They are probes of strong gravity, not only of the weak gravity we observe here on Earth!
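A quick back-of-the-envelope sketch of that energy gap (my own script; the collider energies are rough round numbers, not precise machine specs):

```python
# How far collider energies sit below the naive quantum-gravity (Planck) scale.
import math

planck_energy_TeV = 1.22e16      # E_P = sqrt(hbar*c^5/G) ~ 1.22e19 GeV
machines_TeV = {"LHC": 14.0, "100 TeV collider": 100.0, "1000 TeV collider": 1000.0}

for name, E in machines_TeV.items():
    gap = math.log10(planck_energy_TeV / E)
    print(f"{name:18s} ~{gap:4.1f} orders of magnitude below the Planck energy")
```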

At the beginning of the Universe, the expanding Universe already reached the Planck energy: we have 14 Gyr of data to accumulate, before the expansion makes other parts of the Universe unobservable! However, the dispersion is brutal. The data are spread over roughly 46 Gly of distance (the comoving radius of the observable Universe) due to the cosmic expansion rate.

Where does Quantum Cosmology (QC) enter this game? The point is that cosmological observations of the Universe over its classical history could drive us towards a wholly quantum view of the Universe. What are the main points of QC?

  • Most of our observations of the Universe on cosmological scales are properties of its classical history. For instance: homogeneity, isotropy, the evolution of the fluctuations giving birth to galaxies and galactic clusters or superclusters. They all made up the CMB, galaxies, planets, biota, us, and every possible configuration. At the bottom, you find quantum things…
  • The rate of expansion of the Universe, the dark matter and dark energy, and the ratio of radiation to the other components are surely quantum stuff too.
  • Classical behaviour is a matter of quantum probabilities! Quantum makes up everything!
  • Classical behaviour is not general but an approximation to QM. I encourage you to read my previous logs about the non-deterministic formulation of classical theories.
  • A quantum Universe needs a quantum description. So, should the Universe itself have a wavefunction or state? What is the quantum state of the Universe? Note that this question is hard. It is also related to gravity. The target of Quantum Cosmology is to describe the quantum state of the Universe! But it is also a target of Quantum Gravity. Thus, quantum gravity and quantum cosmology are united, or at least complementary. Remarkably, string theory and quantum gravity require quantum states! The triumph of string theory over the other QG approaches is that it gives a sense of what these states are. Fuzzballs, stringy states of stuff, p-branes…But, of course, the question would be…What is string theory after all?
  • Probabilities. Basically, run! They are the squared modulus of complex amplitudes; more precisely, of matrix elements of operator-valued quantities taken between quantum states.

So, does the Universe have a quantum state? Is it unique? What is that state? We have only one system to observe from our current perspective (our Universe). However, can we ask about other quantum states? In fact, there is a cool variation of this. It is called third quantization. In third quantization, you are allowed to create and destroy (erase) Universes from the Multiverse (just like Zeno-sama in Dragon Ball Super, no joke here!). Contemporary theories and final theories have two parts (three if you allow for a lagrangian!): the hamiltonian part and the wave function part. Hamiltonians define dynamics. Wave functions or states define kinematics. Even if you manage to elucidate what the quantum states are, you are driven to the next level and must ask what evolution the state follows. This is another unsolved task of unified theories. Remark: regularities in the hamiltonian H provide classical dynamics, and it could be chaotic and unpredictable. Regularities in the wave function show up in the observed classical spacetime. They should also explain early homogeneity/isotropy, inflation, and the fluctuations giving rise to the anisotropies producing our galaxies and clusters. The different arrows of time remain unexplained. Why is there only a single time direction? Why do we not observe the future? And there is more: beyond the CMB, the large scale structures and the existence of certain isolated systems, the topology of spacetime could change and introduce a spacetime foam into the picture. That would imply a varying number of spacetime dimensions, even depending on the observed scale.

The problem of finding the quantum state of the Universe gave rise long ago to an approach named minisuperspace models. That is quite a disturbing name, since it has nothing to do with superspace or miniscales! In minisuperspace models, you suppose that the Universe has a certain homogeneous and isotropic closed metric

    \[ds^2=-dt^2+a^2(t)d\Omega_3^2\]

Matter is just a scalar field plus a cosmological constant. a(t) models the history of the Universe, and the Universe itself is associated with a certain wavefunction \Psi(a,\psi), where \psi=\psi(x,t) is the scalar field. Over the 3-space surface, \Psi evolves like a wavefunction. The problem IS:

  1. The state is NOT an initial condition for the Universe, but rather a description of it.
  2. It predicts probabilities for ALL possible alternative 4d histories of the Universe, though. What goes on now? What went on in the past? What is the future end of the Universe? We lack a dynamics here. Conservative approaches use the so-called Wheeler-DeWitt equation for \Psi, but it could have any other dynamics! A commonly quoted schematic form is given below.
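For reference, and with the caveat that this is only a commonly quoted schematic form (conventions, units and factor-ordering choices vary from paper to paper, and matter is frozen here with just a cosmological constant, H^2=\Lambda/3), the Wheeler-DeWitt equation for the minisuperspace model above reduces to a zero-energy, Schrödinger-like equation in a:

    \[\left[\dfrac{\partial^2}{\partial a^2}-U(a)\right]\Psi(a)=0,\qquad U(a)=a^2\left(1-H^2a^2\right)\]

The “potential” U(a) makes the birth of the Universe look like a tunneling problem: \Psi can tunnel through the barrier from a=0 to the classically allowed region a>1/H.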

Fascinating! Now, the no boundary idea by Hawking himself (and Hartle, who worked hard on it). We should search for a cosmological analogue of the ground state in quantum physics. However, there is no lowest-energy state to pick here: for a closed Universe, the Hamiltonian constraint gives H=0. If all possible 3-geometries are inside, what about the boundary? Only Euclidean sums over every 4-geometry whose single boundary matches the arguments of the wave function should be allowed! In other words, the boundary condition of the Universe is that it has no boundary! Technically:

    \[\Psi(b,\chi)=\int DaD\psi\exp\left(-I_E\left[a,\psi\right]/\hbar\right)\]

This integral runs over regular configurations (a,\psi) matching the boundary values (b,\chi). The classical spacetime is just a semiclassical approximation. This object predicts an ensemble of classical histories, similar to the WKB approximation in ordinary quantum theory. The main problem with this proposal turns out to be that not all classical spacetimes seem to be predicted. Take, e.g., inflation (dS spacetime!). The no boundary wave function plus classicality over certain scales favors low inflation scales. However, it seems that we are likely to live in a Universe that has undergone more inflation than the no boundary proposal predicts: there are more places for us to be. The fluctuation probabilities start in a ground state given by dS spacetime (something technically called the Bunch-Davies vacuum). The thing is that large fluctuations would produce large anisotropies…However, they are not observed. Different CMB maps have restricted, as you know, the possible degree of anisotropy in the EM sector. Therefore, we have to seek a way to live with fluctuations tending to vanish…We need a symmetry here. But no idea of what symmetry it is.

What about the arrows of time? We know there is a privileged fluctuation arrow giving rise to the growth of fluctuations in the Universe's past. We also know about the thermodynamic arrow of time associated with the growth of cosmic entropy, the radiation arrow of time related to the retarded EM radiation field, and the psychological arrow of time that allows us to remember the past but not the future. In fact, the no boundary idea CAN explain these arrows of time by a special quantum state selection. This is true EVEN if the dynamical laws (GR+QM) are time reversal invariant! What the no boundary idea says is that these arrows are essentially the same arrow. The fluctuations of the selected special wave functions vanish at only one place on the fuzzy Universe. It is an instanton-like event. A South pole in the cosmic 3-geometry. Fluctuations are small at only one of the places where the Universe is small. For instance, bouncing Universes (like the ones LQG favors) have fluctuations with increasing values away from the bounce, on both sides of the 3-geometry. The arrows of time point in opposite directions on the opposite sides of the bounce. That is why we cannot observe or remember the future. We are doomed to one side of the 3-geometry! (Note: what if we are entangled with other cosmic “clones”? Yeah, this is a crazy idea. Sorry to mention it).

Another big unsolved problem of Quantum Cosmology is named “the topology change of spacetime problem”. Simply, the topology of the Universe could be allowed to change from a quantum viewpoint. That is not observed, of course, at macroscopic scales. But it becomes inevitable as you go higher and higher in energy, towards QG. Spacetime is a manifold. Manifolds usually have metrics, isometries and symmetries. What is indeed our macroscopic manifold? What is its topology? Not much is known about this from cosmological data. In fact, things get complicated whenever you make the metric or the manifold quantum stuff. There is no unique definition of the quantum geometry of spacetime at this point! However, the no boundary idea can predict simple stuff and probabilities for the change of the topology of spacetime. Note that the large scale topology of our Universe could be observable through its effect on the CMB and CNB, or even in other future observations. However, there is no information about how the geometries behave on topologically complex manifolds.

To sum up:

  • We do not know the topology of spacetime.
  • We do not know how small and large dimensions are related. There are hints that the microworld is somehow two-dimensional. The macroworld seems to be 4d.
  • Should we allow for more than one time dimension?
  • What about the coupling constants in the Universe?
  • Classical spacetime, the early Universe, fluctuations, the arrows of time, the CMB and even isolated systems can be handled with the no boundary proposal.
  • What about singularities and black holes?

GR became important in 1956, when Robert Oppenheimer highlighted that GR matters when GM/rc^2\sim 1. The Universe is about 10^{28} cm big today, but at the end of inflation it was only about 10^{-30} cm! Primordial black holes could be evaporating today and be one of the sources of cosmic rays, fast radio bursts or even some strange emissions in the observed Universe. GR is the key to the quasiclassical realm and central to the origins and properties of current isolated subsystems. After a century of GR, GR is as central to physics as QM and the SM. GR reigns over the macroscopic scales. The SM rules over the microscale. Not good enough: we have apparently reached a limit of our comprehension of the Universe. Everything is quantum and relativistic, BUT we do NOT know how to study big, heavy objects that are tiny enough. Spacetime is like a fluid of something else. QM says that action (and energy and angular momentum as well) is quantized. What else?
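A quick numerical look at that criterion (my own sketch; masses and radii are rounded standard values):

```python
# Toy check of the "GR matters when GM/(r c^2) ~ 1" criterion quoted above.
G, c = 6.674e-11, 2.998e8                # SI units

bodies = {
    "Earth":        (5.97e24, 6.37e6),
    "Sun":          (1.99e30, 6.96e8),
    "neutron star": (2.8e30,  1.2e4),    # ~1.4 solar masses, ~12 km
    "Sgr A* (BH)":  (8.2e36,  1.2e10),   # ~4e6 solar masses, ~Schwarzschild radius
}
for name, (M, r) in bodies.items():
    print(f"{name:12s}  GM/(r c^2) = {G*M/(r*c**2):.2e}")
```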

Galaxies are the building blocks of the big Universe out there. The Universe is indeed “a gas of galaxies” somehow. There are about 10^{12} galaxies out there, and new galaxies become visible and others disappear every second due to cosmic expansion. The Big Bang was a special moment in time, not a place in space. Space itself did not exist before the Big Bang, by definition, in our current conception of the Universe. Matter was infinitely dense, infinitely hot, and infinitely opaque (to EM at least) at the beginning of time (but GWaves come to the rescue here!). You could see the BB light in your ancient TV devices (1% of the snow are BB photons). That the Universe had a beginning of time seems inevitable from QM+GR. This is quite a shift from the times of Aristotle, Copernicus, Kepler, Newton and even Einstein himself. All these guys thought the Universe was STATIC, nondynamical. The hope is that quantum cosmology, the fusion of QM and cosmology, will allow a deeper insight into the creation of the Universe and its future evolution (if not chaotic!). Deterministic classical (newtonian) physics is an approximation to QM, where the squared wave function amplitudes give you probabilities. That is, if QC is right, you can only know what the probabilities of your future are, not what your future is going to be. The wave function of the Universe predicts probabilities for histories of the Universe and everything in it. You can even have classically forbidden histories as well. In other words, knowing the wave function of the Universe is not a way of knowing how to be rich or immortal; it is a way to see what the odds are of being rich and immortal. A SETI-like question: what is the probability that there are more than 1000 Earth-like planets in the galaxy hosting intelligent life that we could communicate with? In principle, that is a fair question in QC. However, it is very hard to calculate. We do not know how to compute those probabilities for stars or galaxies or planets (a crude toy estimate is sketched below). However, it seems that gravity is universal everywhere (apparently the same gravitational constant everywhere out there) and newtonian gravity works well for galaxies (except for some basic GR corrections). However, at the beginning of time, GR cannot be valid. Even in critical BH systems we do know GR is not enough. And even when its lowest energy state is reached, quantum fluctuations make quantum rest an absolute impossibility. However, the no boundary idea could provide a way to give up the initial singularity. A quantum regime (with properties to be defined) at the beginning of time could provide a way to define a non-singular beginning. However, the smoothness at the beginning cannot be arbitrarily high or low. Especially if you want to get our Universe.
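Since we just said nobody knows how to compute such odds, here is a deliberately crude Monte Carlo sketch of that SETI-like question (every prior below is invented by me purely for illustration; none is a measurement):

```python
# Drake-style toy estimate: probability of >1000 communicating civilizations
# in the galaxy, under made-up priors. Only illustrates "probabilities, not
# certainties"; the numbers themselves mean nothing.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
stars = 2e11                                  # rough star count of the Milky Way

f_planets   = rng.uniform(0.5, 1.0, n)        # stars with planets (assumed)
f_earthlike = 10 ** rng.uniform(-3, -1, n)    # Earth-like planets per star (assumed)
f_life      = 10 ** rng.uniform(-6,  0, n)    # life, given Earth-like conditions (assumed)
f_contact   = 10 ** rng.uniform(-6, -1, n)    # intelligent + communicating (assumed)

N = stars * f_planets * f_earthlike * f_life * f_contact
print("P(N > 1000) under these toy priors ~", np.mean(N > 1000))
```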

As a final part of this essay, let me remark on the question of relativity and observers. Observers are an important part of physics:

  • We are physical systems within the universe with only a very small probability to exist in any region!
  • Probabilities for what we observe are conditioned on us being here. Observations change the odds and the states; that is a basic result of QM!
  • We won't observe ourselves where we cannot exist.
  • The cosmological constant of the Universe is the main component of our current state.

No boundary does not predict large dimensions, small dimensions, or extra time dimensions. No boundary predicts classical features, early homogeneity and isotropy, the arrows of time, the CMB structure, isolated systems and GWs from early times, plus parameters like the cosmological constant.

Unless probes of extra time dimensions are found, space travel is pointless (unless you figure out how to get a wormhole, an Alcubierre drive or any other form of faster-than-light propulsion). Stars and galaxies seem to be doomed in the future. Atoms or even protons could decay and get destroyed by dark/phantom energy. In the end, black holes will evaporate and matter as we know it will vanish…Leaving what? That is an issue for QG and the quantum structure of the vacuum and spacetime itself. We will end as cold, dark, empty, lonely and simple constituents. Space communication will become more expensive in the future of the Universe due to cosmic expansion. Galaxies will be too redshifted; we will end up alone in our local group (merged into a single decaying galaxy?). For the whole Universe, it seems there is no slit experiment to try: we only get one realization. Most of what you are is here from the seeds at the beginning of time. Frozen accidents of the Universe we are. How to calculate the state? How to evolve the state? What are the odds of surviving the future?

Strings are the biggest hope for unification, even when no boundary and LQG ideas are on the market. Gravity is derived from strings, and its coupling constant is G_N\sim(g_s/M_s)^2, cf. the G_F\sim g^2/M_W^2 origin of the Fermi constant. The problem with strings is that they can say everything about nothing. They seem to imply an UNOBSERVED higher dimensional spacetime and, while no adjustable dimensionless parameters arise, the vacuum expectation values at the inevitable symmetry breaking events cannot predict what our Universe is among zillions of possibilities. The landscape. Is it a real landscape, or just the same thing seen by different observers? Are anthropic ideas still valid here? We are far from the inflationary era, about sixty-one orders of magnitude away in scale, but we know something like inflation should have occurred. Otherwise, the Universe could not look like this one. You would not be reading me right now. The holographic principle says that boundaries are important places. Even with no boundary at hand? Words to remember today:

  • No boundary and wave functions of the Universe.
  • Gravity, thermodynamics, entropy and arrows of time.
  • Gravity can be seen holographically as a field theory on the boundary.
  • Gauge-gravity unification is inevitable at some point.
  • Observers are important in physics.
  • GR and QM are not enough to explain the state of the Universe at critical points (superdense, superheavy, supertiny objects).

Epilogue: some people like A. Valentini, L. Smolin, R. Penrose, and many others have speculated about the idea of QM not being the final story, but an approximation to another theory. It could be true. I find it hard to believe that it could give up or erase probabilities. We abhor probabilities because we do not like uncertainty (do we hate free will at the movies? Just curious). Valentini's idea is that maybe the Born rule is just an equilibrium law. Non-equilibrium QM and extensions of QM could give rise to non-QM predictions testable in the near future. For instance, we could get quantum noise from the Big Bang, and the fact that non-local entanglement could be caused by position dependent wave functions is a possible solution, but it has not produced any experimental departure from quantum mechanical predictions. The problem with this idea is that non-equilibrium particles could be used to send faster than light signals to the past, and to define the absolute time we gave up with the current theory of special relativity. Causality could be challenged by this picture as well. However, the issue of why quantum mechanics prefers a probabilistic interpretation with the squared modulus rule is a mystery that only a few scientists have tried to face. The rest of us, simply, in time, become realistic, in the sense that we only worry about probabilities and their realizations and measurements as the real problem in understanding the Universe. That is, the complexity is in calculating probabilities, and in knowing why the states we observe are as we observe them, not why they arose or what selected them from all the possibilities abstract mathematics could have realized. Why are we here? What are the odds of surviving the near future? Is that all that matters after all?

    \[\boxed{Z=\int DX\exp\left(i\int d^4x L_{TOE}\sqrt{-g}\right)}\]

    \[L_{TOE}=R-F^2-G^2-W^2+i\overline{\Psi}\Gamma\cdot D\Psi+D_\mu H^\dagger D^\mu H-V(H)-\left(\lambda\overline{\Psi} H\Psi+\text{h.c.}\right)\]

LOG#212. Understanding gravity (II).

Hi, there!

In the previous blog post and log, we introduced the high energy stringy part of the 4-point (particle!) tree approximation amplitude for graviton scattering in type II superstrings, \mathcal{A}_{stu}:

(1)   \begin{equation*}\mathcal{A}=G_N\dfrac{<12>^4<34>^4}{stu}\dfrac{\Gamma(1-\frac{s}{M^2})\Gamma(1-\frac{t}{M^2})\Gamma(1-\frac{u}{M^2})}{\Gamma(1+\frac{s}{M^2})\Gamma(1+\frac{t}{M^2})\Gamma(1+\frac{u}{M^2})}\end{equation*}

This “simple” formula gives no negative probability due to the positivity features it owns. Why is quantum physics about amplitudes and probabilities? Well, that is a good question. Perhaps the history of Science is the bias, plus the fact that bayesian probabilistic methods are not as popular as deterministic and frequentist approaches. The battle between bayesianism and frequentism is still seen in statistics and experimental science, but they are somehow complementary views…despite the fact that bayesianism is far superior in many (not all!) cases. Focusing on the subject above, classical physics is a deterministic theory only when formulated in deterministic ways. I mean, newtonian mechanics and classical mechanics formulated in terms of “forces” under the general principle F=Ma is a deterministic approach, because it states that, in order to know the position (in the future or past) of any particle in the Universe, you only have to know its mass and the forces acting on it. Period. The rest is a calculus problem. You can do it manually, or with the aid of computers for bigger and more complex systems (fluids and non-linear systems are in this category). Of course, the loophole is the knowledge of the initial or boundary conditions. You cannot know the exact position of a particle without initial conditions. However, classical physics admits a non-deterministic formulation. It was created during the 18th and 19th centuries. It is based on energy (kinetic, potential and others) and the action principle. Any classical theory, under very mild assumptions, can be formulated in terms of a minimal (more precisely, critical) action principle with a certain hamiltonian/lagrangian function, so the requirement \delta S=0 is seen as more fundamental. Why is the principle behind S=\int_\gamma L or S=\int_\Gamma H not manifestly deterministic? It is related to the fact that the configurations of the particles are not fixed, but vary in configuration space or phase space. Indeed, this formulation allows us (Feynman was the creator of this approach, following a Dirac suggestion) to build up and formulate quantum physics in a way much more parallel to classical mechanics. Thus, the Schrödinger path to quantum mechanics (QM), based on the solutions to H\vert\Psi>=E\vert\Psi>, turns out to be replaced by a more lagrangian or action (functional) approach with the aid of the action principle. The point is that, in order to understand QM from this viewpoint, you must accept that particles or fields follow every possible configuration or path through space-time. The amplitude for the infinite paths is usually written as a sum; really, it is a formal integral over a certain infinite-dimensional space (infinite-dimensional forms are such a thing no mathematician would surely try to visualize here! Sorry, my dear mathematicians, infinite-dimensional forms are “real”. Just joking, I am sure you know those things really exist, don’t you?). The amplitude reads

    \[\mathcal{A}_Q=\sum_{paths} c_\gamma e^{iS/\hbar}\]

or

    \[\mathcal{A}_Q=\int\left[DX\right]  e^{iS/\hbar}\]
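Here is a tiny numerical illustration (my own sketch in toy units, not part of any derivation above): deform the classical straight-line path of a free particle by a sine bump of amplitude a. The action is stationary at a=0, so the phases \exp(iS/\hbar) of far-away paths rotate rapidly and tend to cancel, leaving the classical path dominant.

```python
# Stationary-action demo for a free particle in toy units.
import numpy as np

m, T, hbar = 1.0, 1.0, 0.05          # a small hbar sharpens the phases
t = np.linspace(0.0, T, 2001)
dt = t[1] - t[0]

def action(a):
    """Action S = integral of (m/2) v^2 dt along x(t) = t/T + a*sin(pi t/T)."""
    x = t / T + a * np.sin(np.pi * t / T)    # path from x(0)=0 to x(T)=1
    v = np.gradient(x, t)
    return np.sum(0.5 * m * v**2) * dt

for a in [0.0, 0.1, 0.3, 0.6]:
    S = action(a)
    print(f"a={a:3.1f}   S={S:.4f}   S/hbar={S/hbar:8.2f} rad")
```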

Infinite dimensional matrices and spaces are “normal” in quantum physics, not so in classical physics (unless you live with fields instead of particles). Nima's work in the last years has been devoted to finding a new QFT reformulation where locality and unitarity emerge as some kind of deformation from a new mathematical structure, and new objects arise. The real reason for this need is also related to Feynman graphs. They are complicated to handle in complex theories. Physical theories require renormalizability and finiteness of the results. Regularization of sums and integrals is complicated. When considering gravity, the only truly universal force on every scale, things become nasty at higher order loop corrections. You cannot calculate every Feynman graph correction by hand. Computers will be necessary…even if some new mathematics arises. The amplituhedron and related objects are not really new. They are variations in projective geometry. Anyway, it stuns everyone to learn that amplituhedra allow you to simplify some calculations that Feynman graphs would do the hard way. Perhaps this is a motivation of every physicist too: to simplify things. Locality is just a condition that can be seen as a consequence of a constraint on momentum, (\sum_i p_i)^2=0. Unitarity, beyond the hamiltonian evolution as a phase factor U=\exp(iHt/\hbar), can also be understood as a factorization property of Feynman graphs. Amplituhedra have relatives. They are named with even cooler names: associahedra, EFT-hedra, positroids, matroids,…But they are almost all in the same category. They involve some type of combinatorial (polytopal) geometry (now, have you recalled that loop quantum gravity and spin networks have similar structures too? Please, don’t hate competitors…make them yours!). Matrices are usually generalized into tensors or multiforms. However, there are other possible generalizations of matrices based on the notion of linear independence. This drives you into the idea of matroids. Positroids are a certain type of matrices with “positive” properties (of course, no one likes negativity…true?). Lam polytopes or lamtopes and positroidrons (no, they are not drones, sorry) are certain objects currently being studied by some eccentric mathematicians (however, the number of mathematicians studying these things is low; few people like strange things, and no, I am not referring to the Stranger Things Netflix TV series). Well, how to live with these “new” things? Where did the amplituhedra emerge first? The answer is planar super-Yang-Mills (SYM) theories. Twistors (Penrose was so great creating them!) are mathematical objects encoding spacetime kinematics in SYM (also in superstrings, but that is another story) via:

    \[\overline{\lambda}_a=\dfrac{<a-1\;\; a>\mu_{a+1}+<a+1\;\; a-1>\mu_{a}+<a\;\; a+1>\mu_{a-1}}{<a-1\;\; a><a\;\; a+1>}\]

You can build up a vector

    \[Z_a=\begin{pmatrix}\lambda_a\\ \mu_a\\ y_a\end{pmatrix}\]

where Z_a is equivalent to t_aZ_a. This equivalence relation defines momentum twistor physics!

In the previous blog post, I remarked that the amplituhedron is related to the volume of a certain infinite-dimensional complex amplitude. Is there a more concrete definition at hand? Well, verbose mode on…The formal definition of the amplituhedron is that of a positive grassmannian G(k,n), the space of k-planes in n dimensions. The hope is that similar objects (or generalizations) can be found to simplify hundreds of pages of Feynman graphs into single pages. Computationally, given that we are entering the AI and Big Data era, we need to keep things simple. The amplituhedron is, therefore, a certain k\times n matrix (matroid), modulo GL(k) transformations:

    \[\mathcal{C}_{\alpha a}=\begin{pmatrix}c_1 & c_2 & \cdots & c_n\end{pmatrix}\]

The positive grassmannian, the space G^+(k,n) of such matrices with all their minors positive, is just another name for the amplituhedron! Can you explain it to your children and kids? Well,…maybe. The amplituhedron can be seen as a generalization of a beautiful idea concerning High School triangles. Draw a 3-vertex triangle and mark the center of mass (the barycenter) inside the triangle. Define

    \[Y=c_1z_1+c_2z_2+c_3z_3\]

where the z_i are the triangle vertices, and Y is the center of mass. The center of mass is a positive quantity in the sense that you can find c_i, all positive, giving you the center of mass! You can go further and generalize this to any convex polygon. You can find the center of mass

    \[Y=c_1z_1+\cdots+c_nz_n\]

for a matrix Y=cz, where the numbers c_i, z_i live in G^+(1,n) or G^+(3,n) respectively. 2D cells of these things are “triangles”, but going more general you can build up G^+(k+4,n) or G^+(k,n) spaces, tied up to supergeometry in projective spaces! The amplituhedra are just objects transforming external particle data into polytopal shapes! This trick, while new, allowed Nima and collaborators to write all loop integrands and understand them as certain “faces” (polygons/polytopes) of the amplituhedron. The simplest way to understand the amplituhedron is, perhaps, the following statement:

“Polytopal shape = (Positive grassmannian)(External data)”, or

Generalized polytope amplitude=Amplituhedra times data

where the polytopal shape lives in a certain (non-positive) grassmannian, just as the external data do, but it turns out that the amplituhedron must have certain positivity properties to be compatible with consistent quantum mechanics. Thus, the amplituhedron generalizes classical triangles, polygons and polyhedra. It can be thought of as a certain apeirotope. It also highlights the need for new particles, since it arose in the context of SYM theories. The space of k-planes in k+4 dimensions, in particular. The volume of the amplituhedron IS the all-loop scattering amplitude. Certainly, it mimics the S-matrix ideas of the 70s, but it seems to be something else. Time will tell if this technique evolves. However, I believe no High School student will learn triangles from positive grassmannians (yet), lol. The amplituhedron, like every generalized object, is an abstract entity hard to visualize. Let me add that momentum twistor spaces define the kinematics, but the grassmannian positivity of the amplituhedron encodes the dynamics of the field theory! Furthermore, the shape of the amplituhedron gives you physics. Locality and unitarity are related to positivity. Then, think of amplituhedra as certain generalizations of the positive grassmannian G^+(k,n); a tiny numerical sketch of the triangle picture is given right after the speculations below. Now, we can speculate:

1st. Beyond dualities, no equivalence between existing gauge-redundant descriptions was found. Maybe a new approach is necessary in order to understand the origin of dualities.

2nd. The amplituhedron discovery: it implies, maybe, that we can understand representations of unitarity and locality without any evolution in space-time. Spacetime (GR) and quantum mechanics could be an illusion coming from the shapes of higher dimensional objects. Yes, Plato’s cave returns…

3rd. Gravity and strings links. There is a yet unexplained magic, even in planar SYM, related to the connection between gauge theories and gravity. Identities between color orderings, helicities,…Similar strange combinatorics arise in the so-called BCFW formulae. Not just emergent string theory, but also string theory plus QM, arise from combinatorial geometry. A heritage from the spin networks and the twistor program. It hints at an ultimate description of reality in terms of purely combinatorial (discrete, finite!) objects, even if the objects are infinite dimensional themselves. Remarkably, the moonshine conjecture and its generalizations seem to be a further source of links in this continuum-discrete relationship.
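As promised, the tiny numerical sketch of the “positive triangle” picture (my own script; the triangle and the test points are arbitrary): a point Y lies inside the triangle (z_1,z_2,z_3) exactly when Y=c_1z_1+c_2z_2+c_3z_3 with all c_i>0 and c_1+c_2+c_3=1.

```python
# Barycentric-coordinate positivity test: "inside" = all weights positive.
import numpy as np

z = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])   # triangle vertices z1, z2, z3

def barycentric(Y):
    # Solve the linear system [z1 z2 z3; 1 1 1] c = [Y; 1] for the weights c.
    A = np.vstack([z.T, np.ones(3)])
    return np.linalg.solve(A, np.append(Y, 1.0))

for Y in ([1.5, 1.0], [5.0, 2.0]):
    c = barycentric(np.array(Y))
    print(Y, "-> c =", np.round(c, 3), "| inside:", bool(np.all(c > 0)))
```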

On the other hand, what kind of physics can we do with the amplituhedron? Projective (complex) geometry, mainly. Related to positive grassmannians, as we have seen, and extended to polytopes in certain, likely infinite, dimensions. The grassmannians work like triangles or “faces” and their interiors. The amplituhedra are like being inside polygons/polytopes. You can define certain invariant “forms” (differential expressions) related to the singularities (particles) on the boundaries of the amplituhedron. Twistor-like variables encode the symmetry of the Lorentz group in 4d (specially in SYM) but, in principle, could be exported into more general spaces (that is the real hope…). The invariances of the amplituhedron are indeed more fundamental, in the sense that they allow you to derive (under positivity) unitarity and locality. Locality+Unitarity=Positivity. That is the point behind amplituhedra. In fact, Nima has tried to go further with EFT-hedra. Black holes are the end of reductionism, in the sense that they imply a breakdown of known physics. They also imply a link between IR/UV scales. Black holes, at least the known black holes, are astrophysical entities. They are much bigger than the Planck length. Therefore, understanding black holes also means, in the end, understanding short (ultrashort) scales. High energies are ultimately connected to long distances via gravity. Gravity is inevitable. It is universal. The holographic principle states, however, that local field theories fail to capture the essentials of gravity. The maximal number of degrees of freedom in any theory containing gravitational stuff is bounded. But it does NOT scale with the volume, as one would naively expect from QFT. It scales with the AREA of the boundary region: N\sim \exp(A/4G_N). This surprising fact constrains BH physics and makes field theory descriptions of it difficult. Gravity is, generally, the weakest force. Extreme conditions near black holes, near black hole singularities, change this. During the last 40 years, even longer if you take into account Einstein's early attempts, we have tried to go beyond GR. Einstein gravity signals that it is incomplete, in the sense that it predicts its own breakdown. String theory enters here as the greatest hope. Key words: supersymmetry and positivity have been well known to physicists since the 70s and 80s of the 20th century. Concrete examples of theories where the physics of spacetime and the QM rules are seen to be derived from something more fundamental and from combinatorial objects are known, even if they are not abundant. Nima’s great discovery of the amplituhedron and its relatives in N=4 SYM theories points in this direction. In the cosmological set-up, cosmological polytopes and generalized associahedra could also be possible. Hidden positivity and the geometry of causality and unitarity let Nima speculate about other entities: EFT-hedra and CFT-hedra, from Effective Field Theories and Conformal Field Theories, respectively. However, these abstract entities are not yet fully understood, not even known! It shows, though, that these objects WERE indeed imagined before. Gelfand in the 90s studied grassmannians and generalized determinants, which also arise from quantum black holes and quantum information, in the form of hyperdeterminants, in the formulae for black hole entropies!
From polytopes, via X-hedra, to amplituhedra and beyond, all of them share a secret that makes up all of time and space…Motivic geometry and Grothendieck's theory of motives are just one of the deepest aspects of the links between geometry and number theory that are also getting involved in physics these years.
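For scale, an order-of-magnitude sketch of the holographic counting N\sim\exp(A/4G_N) quoted above, written as S=A/(4l_P^2) in units k_B=1, for a solar-mass black hole (standard constants, my own toy script):

```python
# Bekenstein-Hawking entropy of a solar-mass black hole, order of magnitude.
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
M = 1.99e30                          # one solar mass, kg
r_s = 2 * G * M / c**2               # Schwarzschild radius
A = 4 * math.pi * r_s**2             # horizon area
l_P2 = hbar * G / c**3               # Planck length squared
S = A / (4 * l_P2)
print(f"r_s = {r_s:.2e} m,  S ~ {S:.2e}  (so N ~ exp(S), absurdly large)")
```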

In summary:

  • Two principles of current physics, GR+QFT(SM/QM), get into trouble when we put both together.
  • Eventually, you will put enough energy together to create a black hole, and gravity then wins (usually it is the weakest force, but not anymore). Normal GR breaks down at this point, and we get messy things, like information loss or non-unitarity, in naive approaches.
  • QM says amplitudes are the main observables, but what the observables are is another story. I mean, what amplitudes are and how they relate to particles or fields is the real problem. Philosophically, you may hate to transform the deterministic view of your pathetic life and reality into a wonderful non-deterministic view of your possible or unlikely possibilities. Hints: true observables, according to holography, live on the boundary, not in the bulk, when gravity is included.
  • The action principle is still fundamental, but it could also be derived from more fundamental stuff.
  • Not every consistent QG (quantum gravity) appears from string theory (ST). QG hints that duality is somehow fundamental as a framework. The symmetry behind dualities, and the generalized quantum equivalence principle coming from dualities, is yet to emerge.
  • Specific predictions for HEP and Cosmology are necessary for new approaches in fundamental physics. The amplituhedra and other combinatorial structures are just mathematical constructs. We need concrete predictions going further. However, simplifying things and computations is also a good thing in the forthcoming AI (aided) era.
  • QFT redux: string theory has gauge+gravity theory implications. Till now, it is the only consistent theory that, despite its success, lost its experimental support (did it? Look at the dark stuff, matter and energy). Maybe string theory is already here?
  • Gravity is the weakest force, except in the strong gravity sector: black hole physics and the beginning/end of the Universe.
  • The fate of the Universe seems to be inevitable from the current almost de Sitter phase of positively accelerated expansion of the Universe.
  • Black holes and, more generally speaking, horizon spacetimes emit particles. Global charges are violated by semiclassical black hole physics and black hole evaporation. Caveat: maybe the degrees of freedom are not available to every observer, or there are some extra fields carrying out the information in the evaporation phase when the transplanckian zone is reached. However, no solution is yet at hand.
  • Firewalls imply that you cannot save all of the following three principles: 1) Equivalence principle, 2) Locality, 3) Unitarity. This amazing fact also implies that in any theory with finite range corrections to gravity (breaking down the equivalence principle), a theory containing light extended objects forbids long range inflation models. For inflationary theories, that is a problem.
  • We do not know what the observables at the edge of the Universe are, since we live inside it…
  • SUSY and quantum dimensions. By quantum dimensions I mean fermionic degrees of freedom. Spin does exist, so fermionic dimensions exist, as fermion fields! So, maybe SUSY has already been discovered. We know electrons, muons, taus,…I know. You want a proof. There is indeed stuff with classically anticommuting, fermion-like features, AB=-BA. High school teachers (still) show cross products to some of you…Aristarchus believed in the heliocentric model despite his critics saying that, if it were true, parallax should be found. Indeed, they estimated the parallax effect in Greek times, but the model was dismissed since it would imply that the stars are very far away. Indeed they are. Maybe the problem with supersymmetry is supersymmetry itself.
  • Accidents. Proton plus neutron systems are bound, but neutron plus neutron systems are not. This is a fine tuning at the 1% level. The electroweak symmetry breaking scale seems to be fine-tuned. The Moon exactly eclipsing the full Sun seems to be a fine-tuning. The low quadrupole of the CMB power spectrum seems to be fine-tuned. The critical density is too close to the actual current dark energy density. The vacuum energy, if the cosmological constant equals it, must be cancelled out with a fantastic fine-tuning, precise to 122 orders of magnitude (61 or so with supersymmetry); see the rough numbers sketched right after this list.
  • The Multiverse. Some people think the multiverse is inevitable. No multiverse theory is indeed available so far. We are limited to observations within our horizon. We cannot know what lies beyond the event horizon of black holes or beyond the almost-dS phase of our Universe. Edge states and their observables are unknown in Quantum Cosmology. Indeed, there is no accepted quantum cosmology theory right now. The analogy with black holes yields: no global picture, an unknown number of universes, an unknown degree of imprecision in the universe's entropy, and unknown correlators of the fields at the edge or boundary of the Universe. The Multiverse is about unknown unknowns!
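As promised in the list above, the rough arithmetic behind the “122 orders of magnitude” claim (my own sketch; both densities are order-of-magnitude values, and the exact exponent depends on where you put the cutoff):

```python
# Naive Planck-cutoff vacuum energy density vs. observed dark energy density.
import math

G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
rho_obs = 6e-10                       # observed dark energy density, ~6e-10 J/m^3
rho_planck = c**7 / (hbar * G**2)     # vacuum energy with a Planck-scale cutoff, J/m^3

print(f"rho_Planck ~ {rho_planck:.1e} J/m^3")
print(f"mismatch   ~ 10^{math.log10(rho_planck / rho_obs):.0f}")
```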

Epilogue:

  1. Why is there a macroscopic Universe? The response is Quantum Mechanics AND Relativity. Whatever lies beyond, it must reduce to them. Whatever the ultimate theory is, it surely implies QM+Gravity(GR) and SR.
  2. At long distances, the spins of particles are 0,1/2,1,3/2,2,…From these, only spin 3/2 particles are lacking after the Higgs boson (spin zero) discovery and the gravitational wave (spin two) discovery.
  3. The central drama of physics: QM+GR are incompatible in extreme cases. Black holes and singularities are the real problem. The end of space-time could mean that QM should also be modified (not abandoned!) in cosmological or extreme situations.
  4. The cosmological constant is too small.
  5. The reason the Universe contains big things is the hierarchy “problem”. Gravity is weak, and thus the Earth or elephants are big, while atoms or subatomic particles are small. However, at some point, we will probe tiny heavy objects. There, QM+GR predictions fail. We need something else. Quantum Gravity.
  6. Associahedra vs. permutohedra vs. amplituhedra. There are also cyclohedra (the Bott-Taubes polytope). Permutohedra are n-dimensional generalizations of the hexagon. Permutohedra are somehow n-dimensional honeycombs! They have (n+1)! vertices. There are also the Tamari lattice, the Leech lattice and other cool discrete spaces in higher dimensions with strange shapes and properties. The associahedra are related to the different ways to bracket a string of n letters x_1,\ldots,x_n.
  7. The tensions of Dp-branes are known to be:

    \[T_{Dp}=\dfrac{1}{g_s(2\pi)^pl_s^{p+1}}\]
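A direct evaluation of this tension formula, in string units l_s=1 and with an assumed (illustrative, not measured) coupling g_s=0.1:

```python
# Dp-brane tension T = 1 / (g_s * (2*pi)^p * l_s^(p+1)), from the formula above.
import math

def brane_tension(p, g_s=0.1, l_s=1.0):
    return 1.0 / (g_s * (2 * math.pi) ** p * l_s ** (p + 1))

for p in range(0, 6):
    print(f"D{p}-brane: T = {brane_tension(p):.4g} (string units)")
```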

See you in my next blog post! (A rough one on Quantum Cosmology!)

LOG#211. Understanding gravity (I).

Hi, everywhere! Have you missed me?

A little short thread begins. This thread is based on one of Nima's talks on the understanding of gravity. Do we understand gravity? Classically, yes. Quantumly, it is not so easy. String theory is the most successful approach, but canonical quantum gravity or loop quantum gravity is interesting as well, despite some hatred and pessimistic reactions to this conservative alternative to strings. Even more recently, two papers appeared suggesting a deeper relation between these two approaches, sometimes considered far away from each other…

Nima Arkani-Hamed is a powerful theoretical physicist. In the last years, he has fueled some insights on string theory and quantum field theory, pondering the nature of the ultimate physical theory, the theory of everything, and quantum gravity. What is he working on? A summary:

  • General relativity (GR) and Yang-Mills theory are inevitable and, in the end, they must “merge” or “emerge” somehow from the microworld. Frank Wilczek told me once on twitter that GR is strikingly similar to some non-linear sigma models known in quantum field theory, but not just “the same thing”.
  • The cosmological constant problem and the epic fail of quantum field theory to guess a reasonable value for it (compared with the observed value), and the almost certain failure of modified IR gravity theories (till now) as a solution. Even MOND (MOdified Newtonian Dynamics) is unclear on this stuff and is not accepted as a plain solution despite the absence of a positive dark matter detection, so its surprisingly precise fit at galactic scales (not so good at bigger scales) remains a mystery and an issue.
  • Massive gravity theories and related issues with higher derivative theories (Fierz-Pauli, DGP, galileon theories, TeVeS, ghost-free higher derivative theories,…). Experimentally, they are not supported, and yet they have kept evolving in this experimental era. Likely, the rediscovery of certain ghost-free higher derivative theories, and new BH solutions with scalar hair, have rebooted the interest in these theories, again despite the lack of experimental support; which is not bad, since we are limiting the space of available theories able to fit the data.
  • Horizon thermodynamics and violent crashes with non-locality. The black hole information problem remains more than 40 years after the discovery of Hawking radiation (R.I.P., in memoriam). Are black hole horizons apparent, as recent ideas claim, or are they real? Reality is a powerful but likely relative word.
  • Inflation expectations from the CMB observations. In the next years, the hope of observing the B-modes of the inflationary phase of the Universe in the Cosmic Microwave Background has grown. Living with eternal inflation would push us to adopt the Multiverse scenario, something many people seem to see as cumbersome, but others see as natural, like the solution to the Quantum Mechanics interpretation problem.
  • The end of space-time. Particle physicists are familiar with the idea of unstable or metastable particles. Even the proton would be unstable in many Grand Unified Theories (GUTs) and in the TOE. Hawking radiation is important because it would also imply that not only black holes, but space-time itself, could end or evaporate into “something” or into nothing! Of course, this seems crazy. Accepted known Quantum Mechanics only allows for unitary evolution; allowing something to disappear would introduce the idea that space-time itself will decay or disappear in the far future. We want to know, in such a case, why the spacetime disappears or not, and what the final destiny of the Universe and its spacetime is. We don’t know that, as we do not know what happened in the Planck era or before. That is important,…Not for us, but the destiny of our species depends on it. It is a thing that will be important for our descendants, unless you think we will not survive the forthcoming crisis.
  • Physics with no space-time. Can we formulate physical theories without any reference to space-time at all? What would such theories look like?
  • Surprising new mathematical structures arising in quantum field theories and general relativity, specially those arising from algebraic geometry and/or combinatorial geometry. Physics has always been boosted by advances in mathematics. It is very likely that the next big revolution in physics will happen when new mathematical structures and methods emerge from some side. In QFT, polyzetas and polylogarithms have appeared as wonderful recipes for some closed formulae problems. In general relativity and supergravity or superstrings, the duality and brane revolutions are pushing towards categorical and differential-algebraic (likely “differentigral” in the near future) structures yet to be fully understood, and likely appreciated, by the academic community.

Where are we now? It is hard to say. The Heisenberg uncertainty principle, as you know, surely implies that there is something like an “ultimate microscope”. \Delta E\sim 1/\Delta t means that, eventually, as you put more and more energy into a single “point”, you will create a black hole. No distance or time, based on the usual or commonly accepted riemannian (and likely differential!) geometry, can have any sense when you reach the fantastically ultrashort size of 10^{-35} m, or the time scale of 10^{-43} s. No operational definition of space or time is available, in terms of conventional geometries, at those scales of distance and time. Volovich suggested, almost 30 years ago, in 1988, that the ultimate physical theory would be number theory. However, this statement has not been developed further, beyond some exotic research on p-adic and adelic geometries that, however, has been growing in the last years. The point is, of course, where space-time notions end. There are two main places where physicists are for sure convinced they need something beyond current theories: the Big Bang and the black hole singularities (and maybe, tangentially, the black hole horizon and/or that dark matter and dark energy stuff nobody understands; due to the fact we do not understand GRAVITY?).

Let me be more technical here. Quantum Field Theory (QFT for short from now on) says that the vacuum is not static, but dynamical. It polarizes. So every old undergraduate course on electromagnetism is not completely fair when telling you the vacuum does not polarize. Classically it does not! Quantumly, YES! Mathematically, QFT provides a recipe to calculate the effect of vacuum polarization through loop integrals in Feynman graphs, evaluated as logarithmic integrals WHENEVER you plug in a regulator. That is, a high energy scale \Lambda_{UV} where the usually divergent integral gets regularized to provide a finite value. For one loop, momentum P, and supposing M_P^2 finite, you get something like this:

(1)   \begin{equation*}\dfrac{P^4}{M_P^2}\log \left(\dfrac{\Lambda_{UV}^2}{P^2}\right)\end{equation*}

The Planck mass squared comes from the two vertices of the loop, the 4th power comes from the edges, and the logarithmic regulator of the squared cut-off appears on general grounds from finiteness expectations. Actually, Higgs physics is important because the Higgs mass is sensitive to such logarithmic corrections, as there is NOTHING, absolutely nothing, in the Standard Model keeping the Higgs as light (125 GeV/c²) as we measure it! Note the similarity between this Higgs physics and the formula above. Any reasonable force, indeed, can be expanded in terms of energy scales in QFT. You would get:

(2)   \begin{equation*}f=F_0+\dfrac{1}{p^2}+\dfrac{1}{M_P^2}\log P^2+\dfrac{1}{M_P^2}\delta^3(r)+\ldots\end{equation*}

and where the divergent parts are usually neglected, hoping that some further theoretical approach will teach us why they can certainly be swept under the floor…

Now, enter the gravity realm. Sometimes it is said that quantum gravity is “hard” or impossible. That is not exactly true or accurate. There are in fact some quantum gravity calculations available. For instance, the leading quantum correction to the newtonian force is provided by the following formula (up to some conventions in the numerical prefactors):

    \[F_q=\dfrac{GM_1M_2}{r^2}\left(1-\dfrac{27}{2\pi^2}\dfrac{G\hbar}{r^2c^3}+\cdots\right)\]

and you can see the classical newtonian force plus a leading order correction. I recommend the papers by Bjerrum-Bohr on this subject. Then, you may ask, what is the problem? Well, the problem is…you cannot do the above calculation at ANY loop order. Only certain theories, like maximal supergravity (due to its hidden and exotic Chern-Simons terms) and superstring theory (I am not sure about the Loop Quantum Gravity approach here), can in principle provide a recipe to calculate a finite quantum gravity interaction between gravitons and gravitons and matter at any loop order. Why? The increasing number of Feynman diagrams and the mathematical complexity of the non-linearity of GR make the problem formidable. Likely, only a supercomputer or a trained AI could manage to add all these diagrams in the most advanced theories and tell us if they are correct when contrasted with experimental data. But that is a future today.
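To see why this correction is academic at accessible distances, here is a sketch of the size of the dimensionless factor G\hbar/(r^2c^3)=(l_P/r)^2 controlling the quantum term (my own numbers; only the scaling matters, since prefactor conventions vary between papers):

```python
# Size of the quantum correction factor (l_P/r)^2 at several distances.
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
l_P2 = G * hbar / c**3               # Planck length squared, ~2.6e-70 m^2

for r in [1e-15, 1e-10, 1.0]:        # femtometer, angstrom, meter
    print(f"r = {r:6.0e} m  ->  (l_P/r)^2 = {l_P2 / r**2:.1e}")
```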

Helicities of particles like gravitons or photons enter into the difficult calculations of quantum gravity or QFT. Locality, imposed by our preconceptions about space, time and field formulations due to Lorentz symmetry and causality, is solid in local QFT based on relativity and Quantum Mechanics (QM). For instance, a photon field can be encoded into a vector field with polarization in the form of plane waves:

    \[A_\mu=\varepsilon_\mu e^{ipx}\]

Transversality of the photon field, i.e.,

    \[\varepsilon_\mu p^\mu=0\]

plus gauge redundancy

    \[\varepsilon_\mu\sim\varepsilon_\mu+\alpha p_\mu\]

    \[A_\mu\sim A_\mu+\partial_\mu\Lambda\]

implies a constraint

    \[\sum_i k_iP_i^\mu=0\]

with all the k_i equal. That is, the equivalence principle is somehow telling us that the whole structure of interactions, based on QM and special relativity, allows long-distance interactions only for spin 1 or spin 2 forces (electromagnetism and gravity!). Thus, whatever the TOE is, relativity (special and general, somehow) and QM (the SM, somehow, as well) must emerge from it. In fact, the general structure of any Yang-Mills (YM) theory is pretty simple: it has a Y-diagram form and three labels, for any spin s=0,1/2,1,3/2,2. We have found fundamental particles of spin 0, 1/2, 1 and 2 (0 with the Higgs, 2 if you count gravitational waves as gravitons). The only missing fundamental particle is the one with spin 3/2, a general prediction of supersymmetry (SUSY). Gravity is unique, in a minimal sense, thanks to Einstein's discovery of gravity as curved spacetime geometry. Of course, you could extend GR to include extra fields or gravitational degrees of freedom (massive gravitons, dilatons, graviphotons,…), but all of these have failed till now. SUSY has not been found, but it will be found in the future, I am sure. Black holes in GR, indeed, have some exotic supersymmetries, even in the simple Kerr case. That is not widely known, but it is true.

Moreover, let me move on to the cosmological constant problem, or the vacuum energy density problem if you prefer the alternative name. The fact that

    \[\Lambda_{observed}\sim 10^{-122}\Lambda_{theory}\]

has been known since the times when Einstein introduced the cosmological term. Worse: now we do know it is not zero, which makes the situation more uncomfortable for many. Before 1998 you could simply argue that some unknown symmetry would erase the cosmological term from your gravitational theories. Now, you can not do that. Evidence is conclusive that the cosmological term is positive and non-null. Just having a theory that allows for such fantastically tiny cancellations is weird. Weirder still if that cancellation is precise to 122, or let us say half of that, 61, orders of magnitude. Such a fine tuning is disturbing. Ludicrous! Ridiculous. Anyway, this fact has not stopped theorists from guessing how to live with it. One idea arose after the second string revolution, circa 1995: brane worlds. Plug some damping curvature into the bulk of spacetime. The tension on a brane could then explain why gravity is weak and, maybe, why the cosmological constant is tiny. The problem is that the mechanism is much more consistent with a NEGATIVE cosmological constant. The DGP model gets 4D GR plus massless matter in an AdS (Anti-de Sitter, negative cosmological constant) Universe. In technical words, a negative tension provides a modified propagator in QFT that solves the cosmological constant problem if the constant is negative (otherwise it blows up!). Another problem with brane worlds is that no causal modification can act on the fields of the main brane Universe. Massive gravities, both in brane gravity and in independent models, imply on very general grounds that long-distance interactions should include new SCALAR degrees of freedom. Remember the Higgs mass problem I mentioned before? Well, you lose control of scalars in theories without symmetries. In fact, that is the good and the evil of scalar degrees of freedom: at some point they introduce modifications or violations of either the Einstein equivalence principle (or some soft/hard variation of it) or the Lorentz symmetry behind it. Exciting news for experimentalists: you can seek these violations in designed experiments. Bad news for conservative theorists: nonlinear interactions can introduce conflicts with the usual conceptions and features of locality (even causality), thermodynamics or Lorentz symmetry expected from the current well-tested theories. Giving up locality, the Einstein equivalence principle or the usual properties of QM seems to be inevitable at some point (something also suggested by the yet unsolved black hole information paradox). Unless a loophole is found, it seems the combination of the current theories will imply that we must abandon some as yet untouched principle. A toy model in DGP massive gravity is the so-called galileon gravity lagrangian:

    \[L_s=(\partial\phi)^2+\dfrac{(\partial\phi)^2}{\Lambda^3}\square\phi+\cdots\]

It has a shift symmetry (galilean symmetry) given by transformations

    \[\partial_\mu  \phi\rightarrow \partial_\mu\phi+V_\mu\]

In the simplest Higgs phase of gravity, you would have a zero expectation value for the scalar field. However, radiative corrections to this vacuum are expected to arise, and we do NOT know how to handle them.

20th century physics is marked by the triumph of the two current pillar theories: relativity (special and general) and quantum mechanics. The apparent difficulty of getting gravity into the quantum game is a deep question, but solving it could allow us to explain the Big Bang, the Universe and its future. These difficulties are also raising doubts about the role and formulation of QM (annoying for many to accept, for philosophical reasons more than for reasons of experimental precision). Surely, quantum mechanics could be modified by a further TOE or GUT theory. However, it would remain true in its domain, just like the Bohr toy model or newtonian mechanics. Why is our Universe big, with big things in it? The reason is QM, or more precisely QFT. QFT=SR+QM. And it is true. Particles are classified as entities with mass (energy) and spin (in units of \hbar). Experimentally, with the exception of spin 3/2, we have found particles of every spin up to 2 (spin zero being the Higgs particle, spin 2 if you count GW as graviton wavepackets). Why not spin 5/2 or 3? Well, there are higher spin theories, but they have other issues with locality or with their definition as interacting field theories (excepting some special theories such as those by Vasiliev). The simple Y form of the Feynman graphs in the known theories is particularly striking (beyond some technicalities due to the well-defined procedures of regularization and renormalization of physical quantities, which someone should study better in these crisis times?). However, the Y-shaped self-interaction structure of the Higgs field CANNOT be investigated very well at the LHC. It is a nasty hadron collider. We will need a linear or muon collider, or a circular collider tuned to the resonance of the Higgs particle, to study its self-interactions (in the standard model, the Higgs potential is simply a cubic plus quartic potential). Otherwise, a 100 TeV collider would serve for this as well: a 100 TeV collider would probe vacuum fluctuations of the Higgs field itself (as would the muon collider or any other special collider tuned to the Higgs).

There are some critics who regard those machines as a waste of money. Of course, there is no guarantee that we will find something new. But the Higgs particle interactions MUST be studied precisely. The universe is surprisingly close to Higgs field metastability. It is something that well deserves the money; perhaps I would only complain about not planning it carefully enough. We need to plan the study of the Higgs potential further. However, note that the LHC runs at about 10 TeV, and the future colliders will be clean lepton (or photon!) colliders tuned to the Higgs resonance, or 100 TeV/1000 TeV machines (I wish to see the latter in my lifetime), and those energies are still much smaller than the Planck energy, 10^{16} TeV. Neutrino physics, gamma rays, likely X-rays and radio bursts, or gravitational wave astronomy can surely probe strong gravity and extreme processes much better. For free. Of course, you have to be good enough to catch those phenomena, and that, again, costs time and money.
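To put those energy scales in perspective, a trivial order-of-magnitude comparison in Python (my own illustration, using the round numbers quoted above):

    # Rough hierarchy: collider energies versus the Planck energy (orders of magnitude only).
    E_Planck = 1.0e16   # TeV, as quoted above
    colliders = {"LHC": 1.0e1, "100 TeV collider": 1.0e2, "1000 TeV collider": 1.0e3}

    for name, E in colliders.items():
        print(f"{name:>18}: E/E_Planck = {E / E_Planck:.0e}")

Even the craziest 1000 TeV machine sits 13 orders of magnitude below the Planck energy.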

String theory news… But first, a little bit of history. Strings were discovered in the context of S-matrix theory and the strong interactions. The Veneziano amplitude was the key to finding out that string theory has something to do with the nuclear realm (despite this being surely ignored by quantum stringers right now, or at least not as appreciated as 30 years ago!). String amplitudes of four-point particles have a simple structure:

(3)   \begin{equation*}A_s=\dfrac{<12>^4\left[34\right]^4}{stu}\mathcal{C}\end{equation*}

and the general amplitude takes the form

(4)   \begin{equation*}\mathcal{A}=\dfrac{<12>^4\left[34\right]^4}{stu}\Pi_{i=1}^\infty\dfrac{(s+i)(t+i)(u+i)}{(s-i)(t-i)(u-i)}=\dfrac{<12>^4\left[34\right]^4}{stu}\dfrac{\Gamma(-s)\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)\Gamma(1+t)\Gamma(1+u)}\end{equation*}

Here, s, t, u are the variables encoding the energies of the colliding strings in a certain frame, and \Gamma is Euler's gamma function, a generalization of the factorial function to real AND complex values. The numbers between brackets, raised to powers, are certain spinor/vector quantities coding the helicities. Well, take weakly coupled string theory amplitudes, and 4-point interactions at tree level (no loops in the classical sense); independently of the compactification, you can get wonderful universal formulae for gravity and YM amplitudes:

1st. For gravity, you get:

(5)   \begin{equation*}\mathcal{A}_G=\dfrac{<12>^4\left[34\right]^4}{stu}\dfrac{\Gamma(-s)\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)\Gamma(1+t)\Gamma(1+u)}\times \mathcal{K}\end{equation*}

where \mathcal{K} is

    \[\mathcal{K}=-\dfrac{1}{stu}\]

for Type II strings at the level of the three-graviton interaction, and

    \[\mathcal{K}=-\dfrac{1}{stu}+\dfrac{1}{s(1+s)}\]

for heterotic strings at the level of the 2-graviton-scalar interaction, and

    \[\mathcal{K}=-\dfrac{1}{stu}+\dfrac{8}{(1+s)s}-\dfrac{tu}{s(1+s)^2}\]

for bosonic strings at the level of the 2-graviton-scalar interaction. Note the universal pole structure of the formulae above, arising from the common factor

    \[\mathcal{C}=\dfrac{\Gamma(-s)\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)\Gamma(1+t)\Gamma(1+u)}\]

2nd. YM theory inside string theories has the following 4-point tree level amplitude:

(6)   \begin{equation*}\mathcal{A}_{YM}=<12>^2\left[34\right]^2\dfrac{\Gamma(1-s)\Gamma(1-t)}{\Gamma(1-s-t)}\times \mathcal{K}'\end{equation*}

where \mathcal{K}' is

    \[\mathcal{K}'=-\dfrac{1}{st}\]

for Type I strings at the level of the three-gluon interaction, and

    \[\mathcal{K}'=-\dfrac{1}{st}-\dfrac{u}{s(1+s)}\]

for bosonic strings at the level of the 2 gauge boson-scalar interaction. Heterotic theories would have massive pole structures in a similar expression. The pole structure (beyond the massive corrections in the heterotic case) arises from

    \[\mathcal{C}'=\dfrac{\Gamma(-s)\Gamma(-t)}{\Gamma(1-s-t)}\]

These gamma functions are related to the Euler beta function and can be seen as generalizations of the famous Veneziano amplitude, which gave birth to string theory around 1970. In fact, the S-matrix programme can be carried out for any mass and spin. The string theory "magic" procedure, with only massless external states, and playing with other interactions or deformations of the above formula, gives rise to some open problems (some of them known from the old string theory times).

For 3-point interactions at tree level, you get a Y-amplitude

    \[g\varepsilon^{\mu_1\mu_2\cdots \mu_N}(p_1-p_2)_{\mu_1}\cdots (p_1-p_2)_{\mu_N}\]

For 4-point amplitudes (Y+Y) at tree level, you get

    \[\dfrac{gg'}{s-M^2}G_N^{(d)}\left[\cos\theta\right]\]

where G_N^{(d)}(\cos\theta) are Gegenbauer polynomials, G_0=1, G_1=x, G_2=dx^2-1, which arise in the expansion of the newtonian-like kernel in d dimensions:

    \[\dfrac{1}{\vert z-r\vert^{d-2}}=\sum_N r^NG_N^{(d)}\left[\cos\theta\right]\]
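You can check this expansion numerically. A short Python sketch (my own; I identify G_N^{(d)} with the standard Gegenbauer polynomial C_N^{(\alpha)}, \alpha=(d-2)/2, up to normalization conventions):

    import numpy as np
    from scipy.special import eval_gegenbauer

    d = 4                      # dimension in the expansion; alpha = (d-2)/2
    alpha = (d - 2) / 2.0
    x = np.cos(0.8)            # cos(theta)
    t = 0.2                    # expansion parameter r/z < 1

    # Left-hand side: the newtonian-like kernel (1 - 2 x t + t^2)^(-alpha)
    lhs = (1.0 - 2.0 * x * t + t * t) ** (-alpha)

    # Right-hand side: truncated series sum_N t^N C_N^(alpha)(x)
    rhs = sum(t**N * eval_gegenbauer(N, alpha, x) for N in range(40))

    print(lhs, rhs)            # agreement to machine precision for |t| < 1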

These remarkable formulae link with the Veneziano amplitude:

(7)   \begin{equation*}A_V=\dfrac{\Gamma(-1-s)\Gamma(-1-t)}{\Gamma(-2-s-t)}\end{equation*}

It has a striking pole structure, with poles at s=-1, s=0, s=1, …, s=N, …, and residues proportional to 1, t+2, (t+2)(t+3),\ldots,(t+2)\cdots(t+N+2). In particular, the residue at s=1 provides

    \[\mbox{Res}_{s=1}=(t+2)(t+3)=\dfrac{25}{4}\left(\cos^2\theta-\dfrac{1}{25}\right)=\dfrac{25}{4}\left(\cos^2\theta-\dfrac{1}{d}\right)+\dfrac{25}{4}\left(\dfrac{1}{d}-\dfrac{1}{25}\right)\]
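You can verify this pole structure of eq. (7) numerically. A Python sketch (my own; note that in this normalization the residue at s=N carries an extra sign and a 1/(N+1)! factor with respect to the levels quoted above):

    from scipy.special import gamma

    def A_V(s, t):
        # Veneziano amplitude, eq. (7): Gamma(-1-s)Gamma(-1-t)/Gamma(-2-s-t)
        return gamma(-1.0 - s) * gamma(-1.0 - t) / gamma(-2.0 - s - t)

    t = 0.37      # generic, non-integer value of t
    eps = 1e-7    # small distance from the pole at s = 1

    res_numeric = eps * A_V(1.0 + eps, t)          # (s-1) A_V near s = 1
    res_expected = -(t + 2.0) * (t + 3.0) / 2.0    # -(t+2)(t+3)/2! in this convention

    print(res_numeric, res_expected)               # agree to ~6 digits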

The residue expression above is POSITIVE as long as d\leq 25, a fact known from bosonic string theory (which lives in 25+1 spacetime). Similarly, the open superstring amplitude

    \[A(1^-2^-3^+4^+)=<12>^2\left[34\right]^2\dfrac{\Gamma(-s)\Gamma(-t)}{\Gamma(1-s-t)}\]

has an analogous residue structure (levels 1, t+1, (t+1)(t+2),\ldots,(1+t)\cdots(N-1+t), corresponding to s=1,\ldots,s=N), and at s=3 you get

    \[\mbox{Res}_{s=3}=(t+1)(t+2)=\dfrac{9}{4}\left(\cos^2\theta-\dfrac{1}{9}\right)=\dfrac{9}{4}\left(\cos^2\theta-\dfrac{1}{d}\right)+\dfrac{9}{4}\left(\dfrac{1}{d}-\dfrac{1}{9}\right)\]

which is fine iff d\leq 9. You can enforce positivity at every level as well! But there is a price to ensure it: a d=2 conformal setting (the only known theory doing that is string theory!). If you want a ghost-free, positive-amplitude theory, such that

    \[a=\sum_k c_k\cos (k\theta)\]

remains positive with c_k>0, you have to live in d=2 dimensions (string theory magic does that, on the abstract worldsheet!). However, this is a hard statement, since the c_k become exponentially small, and adding a \cos \theta-1 factor makes it false! The positivity of these amplitudes and the analysis of the hidden symmetry structure of the string diagrams have revealed a "jewel", a hidden geometric object, in quantum mechanics/string theory/quantum field theory. How? The residue structure of gravity amplitudes is correlated with that of open superstrings:

    \[ \mbox{Res}_N^{Gravity}(\cos\theta)=\left(\mbox{Res}_N^{OpenS}(\cos\theta)\right)^2\]

This is a new hint of the Gravity=YM^2 mantra of these times, but there is more. Departures from positivity seem to indicate inconsistent theories. The positivity magic struggles with massive higher spin states, which is where the problems really live too. Which higher spin amplitudes should we include, and which should we exclude? Not an easy task. The idea is that we should search for a way to understand higher spin amplitudes without the worldsheet picture as the primary entity (that d=2 restriction is hard for practical purposes in the real world). There is another reason, and that is gravity. Gravity is, from a certain viewpoint, more positive than open superstrings (note the power in the amplitude coming from the spinors/polarizations). Here it comes: the amplituhedron. What is the amplituhedron? Well, it is a new object encoding the positivity of amplitudes in QFT and string theory. A formal definition is something like this:

    \[\boxed{\mathcal{M}_{n,k,L}\left[Z_a\right]=\mbox{Vol}\left[\mathcal{A}_{n,k,L}\left[Z_a\right]\right]}\]

What is this? Well, roughly speaking, the amplituhedron is a certain generalized polytope (a higher dimensional polyhedron) in projective geometry (technically, a generalization of the positive grassmannian) such that its volume is the all-loop scattering amplitude of particle physics processes. There, n is the number of interacting vertices (or particles), k is the dimension of the plane specifying the helicity structure of the particles, and L is the loop order. Certainly, the idea that amplitudes are lower-dimensional projected shadows of higher dimensional, maybe discrete, structures is a powerful language. I will talk about amplituhedra and related stuff in the next posts.

Finally, to end this long, boring post, let me review some of the magic. The amplitude with no negative probabilities (positive residues!) for gravity and massive particles reads:

(8)   \begin{equation*}\mathcal{A}=G_N\dfrac{<12>^4\left[34\right]^4}{stu}\dfrac{\Gamma(1-\frac{s}{M^2})\Gamma(1-\frac{t}{M^2})\Gamma(1-\frac{u}{M^2})}{\Gamma(1+\frac{s}{M^2})\Gamma(1+\frac{t}{M^2})\Gamma(1+\frac{u}{M^2})}\end{equation*}

However, the emphasis is usually placed on the spacetime picture of strings, instead of on the amplitude structure inherited from S-matrices! The (free) equation of motion of a superstring is usually given by

    \[\partial_\tau^2 X^\mu(\sigma,\tau)-\partial_\sigma^2 X^\mu(\sigma,\tau)=0\]

The so-called Green-Schwarz theory contains a Super-Yang-Mills (supersymmetric extension of YM) theory with action

(9)   \begin{equation*}S=\int\left(-\dfrac{1}{4}Tr F^2+i\overline{\Psi} \Gamma \cdot D\Psi\right) dvol\end{equation*}

String theory has a problem: it yields too many consistent vacua for the Universe. Our SM+GR world is only one among 10^{500} in generic superstring models, or one among 10^{272000} different universes in (10+2)-dimensional F-theory. These are Universes very similar to ours (or very different), differing in coupling constants and vacuum expectation values, with no hint of how to select among them. This is string theory's nasty trick: there is no adjustable parameter, but you are driven to accept that there are plenty of Universes. To my knowledge, no one has even proved that our constants and field theory expectation values can be derived from one of those Universes in a clear, plain way. However, there are not too many consistent quantum theories of gravity on the market…

For instance, the (quantum) supermembrane allows you to get a free bosonic equation of motion given by

(10)   \begin{equation*}\partial_i\left(\sqrt{-g}g^{ij}E^\mu_j\right)=0\end{equation*}

and where

    \[g_{ij}(X)=\eta_{\mu\nu}\partial_iX^\mu\partial_jX^\nu=E^\mu_iE^\nu_j\eta_{\mu\nu}\]

Going to superspace Z=(X,\theta) membranes, the supermembrane equation gets modified

(11)   \begin{equation*}\partial_i\left(\sqrt{-g}g^{ij}E^\mu_j\right)=\varepsilon^{ijk}E_i^\nu\partial_j\overline{\theta}\Gamma^\mu_\nu\partial_k\theta\end{equation*}

SUSY and coherence of the theory in Minkowskian (Lorentzian) signature force you to match the bosonic and fermionic degrees of freedom of branes, i.e., N_B=N_F, such that for a p-brane in D-dimensional spacetime (generally Lorentzian; giving this up allows you to go beyond eleven dimensions) you have:

    \[N_B=D-p-1\]

    \[N_F=\dfrac{MN}{4}\]

where M is the number of fermionic degrees of freedom, and N is the number of supersymmetries on the superspace target. From this simple equation, you can easily derive that

    \[D-p-1=\dfrac{MN}{4}\]

Take any 1-brane, so you get D-2=\dfrac{MN}{4}. If you impose N=1 SUSY, you get superstring theory with M=4 generators (fermionic d.o.f.) in D=3, with M=8 generators in D=4, with M=16 generators in D=6, and with M=32 generators in D=10. You can play with N=2, N=4 and N=8 supersymmetries in these dimensions too.
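A two-line Python check of this counting (my own sketch of the arithmetic above):

    # Enumerate D - p - 1 = M*N/4 for 1-branes (p = 1) with N = 1 SUSY,
    # recovering the dimensions quoted above.
    p, N = 1, 1
    for M in (4, 8, 16, 32):        # fermionic degrees of freedom
        D = p + 1 + M * N // 4
        print(f"M = {M:2d}, N = {N}  ->  D = {D}")
    # -> D = 3, 4, 6, 10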

See you in the next amplituhedron post!

LOG#210. Weak gravity conjectures.

 

Hi, everyone! I’m sorry again for the time lapse, but I needed it: to secure my current job (as a High School teacher) and, well, to fight against doubts about my health. Thank you for your patience!

Gravity is weak (at least on our brane!). The force carriers and couplings tell us so.

Are you weak? Well, I know, I know… Weak or strong depends on the context and the regime you are in. Maybe… But why does quantum gravity matter here? Well, we want to unify gravity with (Yang-Mills) gauge theories. It is hard.

From general perturbative arguments of quantum field theory (QFT), every theory can be expanded in powers of 1/\Lambda. However, the so-called semiclassical computation is expected to break down at some point, due to our ignorance about the true degrees of freedom of gravity and quantum space-time in the microworld. A similar classical phenomenon happens in the Standard Model (SM): the SM electron is neither "very light" nor too heavy, and the renormalized photon coupling near the electron involves the scale eM_P, the electron charge times the Planck mass (natural units are selected here), which marks where the naive IR low-energy theory given by classical QFT must fail. And here is where the Weak Gravity Conjecture (WGC) enters. So, what is the WGC? Simply, it is the statement that in any gauge theory coupled to gravity you can not have arbitrary masses or couplings; you are confined by a bound of the following form (here I suppose the above U(1) gauge theory):

(1)   \begin{equation*}\Lambda \leq eM_P\end{equation*}

Here, e is the U(1) gauge charge: the electric charge or, more generally speaking, the abelian gauge coupling (e.g., the hypercharge).
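As a quick numerical aside (my own illustration, not part of the original argument): plugging the electromagnetic coupling into eq. (1) puts the cutoff just below the Planck mass, and in the particle form of the conjecture (some charged state must satisfy m\leq eM_P) the electron passes with a ludicrous margin:

    import math

    M_P = 1.22e19                    # GeV, the (non-reduced) Planck mass
    alpha_em = 1.0 / 137.0           # fine-structure constant
    e = math.sqrt(4.0 * math.pi * alpha_em)   # U(1) coupling ~ 0.30 in natural units

    Lambda_bound = e * M_P           # cutoff bound of eq. (1)
    m_electron = 0.511e-3            # GeV

    print(f"Lambda <= e*M_P ~ {Lambda_bound:.2e} GeV")
    print(f"electron: m/(e*M_P) ~ {m_electron / Lambda_bound:.1e}")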

Why is this important? Well, the keyword in fashion these days is: emergence. Gauge theories (and even GR) are expected to be emergent, i.e. derived from a more fundamental framework (string theory is such a framework, but so are supergravity or supersymmetric theories). Then, the WGC takes a more general form. The smallness of the gauge couplings at low energies (IR, infrared) is (likely) caused by heavy particles in the UV (the high energy, short distance regime). This is sometimes referred to as IR/UV "entanglement", but it is a sort of duality like the others discovered since 1995, the second superstring revolution. Now, there is a conjecture, by Harlow, in the form of a WGC:

“The WGC for any emergent gauge field is mandatory to enforce the factorizability of the Hilbert space in quantum gravity (QG) with multiple asymptotic symmetries.”

Here, let me highlight the complementarity between the UV-cutoff scale \Lambda and the IR regulator expected from field theory. This is the origin of the above mass-charge bound and its generalizations, which I am going to discuss a little bit.

Generalization one. Extra dimensions and WGC bounds.

Suppose you formulate gravity in (D+1) dimensions and compactify one extra dimension on a circle. Then you get the celebrated relationship

(2)   \begin{equation*} M_P^{D-2}(D)=2\pi R M_P^{D-1} (D+1)\end{equation*}

On the other hand, the gauge theory on the circle provides a KK charge coupling

(3)   \begin{equation*}\dfrac{1}{e_{KK}^2}=\dfrac{1}{g^2_{YM}}=\pi R^3M_P^{D-1}(D+1)\end{equation*}

From these, and from the hypothesis that the UV scale is less than or equal to the Planck scale, i.e., \Lambda\leq M_P(D+1), counting degrees of freedom you obtain something like this:

(4)   \begin{equation*} N_{dof}\,\Lambda_{QG}^{D-2}\leq M_P^{D-2}(D)\end{equation*}

and where, in the U(1) case, the tower of charged states with mass spacing m\sim eM_P implies that the number of degrees of freedom below the quantum gravity scale satisfies

    \[N(\Lambda)\geq \dfrac{\Lambda}{eM_P}\]

implies that

    \[M_P^2\geq N(\Lambda_{QG})\Lambda^2_{QG}\]

and thus

    \[M_P^2\geq \dfrac{\Lambda}{eM_P}\Lambda^2\]

or

    \[M_P^2\geq\dfrac{\Lambda^3_{QG}}{eM_P}\]

and finally

    \[\boxed{\Lambda_{QG}\leq e^{1/3}M_P}\]
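Numerically, the cube root softens the suppression considerably. A one-liner sketch (mine, with the same e ~ 0.30 as before):

    import math

    M_P = 1.22e19                            # GeV
    e = math.sqrt(4.0 * math.pi / 137.0)     # ~ 0.30
    print(f"e^(1/3) M_P ~ {e**(1.0/3.0) * M_P:.2e} GeV  "
          f"(vs e M_P ~ {e * M_P:.2e} GeV)")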

In fact, a more precise bound would be

    \[\boxed{\Lambda_{U(1)}\leq (ke)^{1/3}M_P}\]

since our calculation involved a nasty trick of embedding the KK gauge sector into the gravity realm of the bulk. Even more surprisingly, in any dimension, loops and UV cut-offs allow us to generalize that into

(5)   \begin{equation*} \Lambda_{U(1)}\sim \left(eM_P^{3\frac{D-2}{2}}\right)^{1/(D-1)}\end{equation*}

and the effective gravitational coupling runs as

(6)   \begin{equation*}\lambda_g(E)=G_NE^{D-2}\sum \mbox{dim}R_i\end{equation*}

and where the sum extends over the dimensions of the gauge representations of the i-th particle in the E-configuration after compactification or cut-off regularization. Again, M_P^{D-2}\geq N_{dof} \Lambda^{D-2}, and now:

-For U(1): G_Nm^2\leq c_{WGC} k^2e^2n^2, where k is the lattice spacing and n is some large quantum number, such that

    \[n_{max}\leq \dfrac{\Lambda_{QG}}{ekM_P^{(D-2)/2}}\]

and therefore the scale of quantum gravity is bounded in the form

    \[\Lambda_{QG}\leq (ek)^{1/(D-1)}M_P^{\frac{3(D-2)}{2(D-1)}}\]

In fact, the requirement of avoiding the Landau pole in U(1) also implies a similar bound, provided

    \[\lambda_{gauge}(E)\sim e^2E^{D-4}\sum (kn)^3\sim e^2k^2E^{D-4}\left(\dfrac{E}{ekM_P^{(D-2)/2}}\right)^3\]

You can also generalize all this to non-abelian gauge theories in any dimension!!!!! For SU(2) in D dimensions, you obtain

(7)   \begin{equation*}\Lambda_{QG}\leq k^{1/D}g^{2/D}M_P^{2(D-2)/D}\,\forall D\end{equation*}

You can easily evaluate this for our 4D world to get

    \[\Lambda\leq k^{1/4}g^{1/2}M_P\]

For larger groups, such as SU(3), the formula is more complicated, but after some group theory and mathematics you can obtain

    \[\Lambda_{QG}\leq k^{2/(D+3)}g^{5/(D+3)}M_P^{\frac{7(D-2)}{2(D+3)}}\]

and for 4D

    \[\Lambda_{QG}\leq k^{2/7}g^{5/7}M_P\]

The gauge couplings for the SU(2) and SU(3) groups also follow from these calculations:

    \[\lambda_g(E,SU(2))\sim\dfrac{E^D}{g^2kM_P^{2(D-2)}}\]

and

    \[\lambda_g(E,SU(3))\geq g^2E^{D-4}\dfrac{(E/gM_P^{(D-2)/2})^7}{k^2}\]

For a general non-abelian gauge group G, the two main formulae for the WGC are the following:

(8)   \begin{equation*}\boxed{\Lambda_{QG}\leq k^{\frac{r_G}{n_G+D-2}}g^{\frac{n_G}{n_G+D-2}}M_P^{\frac{n_G+2}{n_G+D-2}\left(\frac{D-2}{2}\right)}}\end{equation*}

(9)   \begin{equation*}\boxed{\lambda_g(E)\geq g^2E^{D-4}\dfrac{Q_{max}^{r_G+L_G+2}}{k^{r_G}}\sim\dfrac{E^{n_G+D-2}}{k^{r_G}g^{n_G}M_P^{(n_G+2)(D-2)/2}}}\end{equation*}

These equations and formulae have the following meaning in D dimensions: they provide a bound between the masses of the lightest particles and the heaviest ones, expressed as a relationship involving the rank of the group r_G and the lattice spacing L_G, as a function of the rank plus half the number of roots of the group. Note that for SU(N) you have d_G=N^2-1 and r_G=N-1, and that you also get the asymptotic limit for "big numbers" (r_G<<d_G):

    \[\lim_{n\rightarrow\infty}\Lambda_{QG}=gM_P^{(D-2)/2}\]

as usual in compactification scenarios and other brane world or BSM theories. Indeed, for instance, in a KK scenario with SU(2)\times U(1), you would get something like this:

    \[M^2(j,q)\leq (g^2j^2+e^2q^2)M^{(D-2)}_P\]

and

    \[N(E)\sim\dfrac{1}{eg^2}\left(\dfrac{E}{M_P^{(D-2)/2}}\right)^3\]

with

    \[\Lambda_{QG}\leq e^{1/(D+1)}g^{2/(D+1)}M_P^{\frac{5}{2}\frac{D-2}{D+1}}\]

The general gauge-gravity unification, supposing n=\sum_i^Dn_i, with n_i=r_i+l_i, implies that the lightest masses can not be arbitrary; they are related to the maximal high-energy mass in such a way that

(10)   \begin{equation*}\boxed{\Lambda_{QG}\leq (\mbox{det}\tau)^{-1/2}(\Pi_i g_i^{n_i})^{1/(n+D-2)}M_P^{\frac{(n+2)(D-2)}{2(n+D-2)}}}\end{equation*}

(11)   \begin{equation*}\boxed{\lambda_i\geq \dfrac{(\mbox{det}\tau)^{1/2}E^{n+D-2}}{\Pi_i g_i^{n_i}M_P^{(n+2)(D-2)/2}}}\end{equation*}

In summary, gauge-gravity unification IMPLIES, on very general grounds, the Weak Gravity Conjecture:

(12)   \begin{equation*}\boxed{\Lambda^2_{gauge}\leq e^2<q^2>_\Lambda M_P^{D-2}}\end{equation*}

Generalization two. String theories and WGC bounds.

What about string theory? It turns out that string theory, as a generalized KK theory, also contains a generalized form of the WGC! In the weak coupling limit, any stringer has derived at some point something like this:

    \[\lambda_g(E)\sim g_s^2\dfrac{e^{2\pi(2+\sqrt{2})\sqrt{N}}}{N^{3/2}}\]

for N=E^2\alpha'/4 for gravity, and

    \[\lambda_G(E)\sim g_s^2\dfrac{e^{2\pi(2+\sqrt{2})\sqrt{N}}}{N^{2}}\]

for N=E^2\alpha'/4 for gauge theory in the background. Also, you probably know that

    \[G_N\sim \dfrac{g_s^2}{M_s^8}\]

in 10d heterotic string, and

    \[g^2_Y\sim g_s^2/M_s^6\]

This implies that

    \[\lambda_g\sim g_s^2\left(\dfrac{E}{M_s}\right)^8\]

for gravity, and

    \[\lambda_G\sim g_s^2\left(\dfrac{E}{M_s}\right)^6\]

for the gauge theory below the string scale, up to numerical constants. Moreover, well below the string scale \lambda(gravity)<<\lambda(gauge), but at the string scale we would have \lambda(gravity)\sim\lambda(gauge), at least if g_s<<1. For a p-dimensional torus compact manifold:

    \[G_N\sim\dfrac{g_s^2}{M_s^2R_1\cdots R_p}\]

    \[g^2_Y\sim \dfrac{g_s^2}{M_s^6R_1\cdots R_p}\]

    \[g_i^2\sim\dfrac{g_s^2}{M_s^8R_1\cdots R_p}\]

where g_i is the KK photon coupling to the compact R_i circle. Therefore, you can derive the following results:

    \[\lambda_{gravity}\sim\dfrac{g_s^2E^{8-p}}{M_s^8R_1\cdots R_p}\]

    \[\lambda_{gauge}\sim \dfrac{g_s^2E^{6-p}}{M_s^6R_1\cdots R_p}\]

and then

    \[\lambda(gravity)\sim g_s^2\left(\dfrac{E}{M_s}\right)^8\]

    \[\lambda(gauge)\sim g_s^2\left(\dfrac{E}{M_s}\right)^6\]

    \[\lambda(KK,i)\sim g_s^2\left(\dfrac{E}{M_s}\right)^8\]

Similarly, slightly different but analogous, in Type I strings you have

    \[G_N\sim \dfrac{g_s^2}{M_s^8}\]

    \[g^2\sim\dfrac{g_s}{M_s^6}\]

Note that the gauge coupling is the only difference with respect to the heterotic case. Then, you proceed and get

    \[\lambda(gravity)\sim g_s^2\left(\dfrac{E}{M_s}\right)^8\]

    \[\lambda(gauge)\sim g_s\left(\dfrac{E}{M_s}\right)^6\]

Therefore, \lambda(gauge)\sim g_s>>\lambda(gravity)\sim g_s^2 in Type I strings, and \lambda(gravity, M_s)\sim g_s^2. Comparing both cases, you will surely note that \lambda(gauge, open strings, M_s)\sim g_s, while \lambda(gauge, closed strings, M_s)\sim g_s^2.
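A tiny Python sketch of these scalings (my own, for heterotic-like closed strings, with prefactors dropped and M_s set to 1):

    # lambda_gravity ~ g_s^2 (E/M_s)^8 and lambda_gauge ~ g_s^2 (E/M_s)^6 below M_s.
    g_s = 0.1      # weak string coupling

    for E in (0.01, 0.1, 0.5, 1.0):          # energies in units of M_s
        lam_gravity = g_s**2 * E**8
        lam_gauge = g_s**2 * E**6
        print(f"E = {E:4.2f} M_s: gravity ~ {lam_gravity:.1e}, gauge ~ {lam_gauge:.1e}")
    # gravity is parametrically weaker than gauge below M_s; they meet at E ~ M_s.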

What about D-branes? Well, they are also included. D-branes are "heavy" particles in general. Take the IIA 10d superstring theory, and note that

    \[g^2\sim \dfrac{1}{M_s^6}\]

    \[M_P^8\sim\dfrac{M_s^8}{g_s^2}\]

For a D-brane, you get that

    \[gM_P^4\sim \dfrac{M_s}{g_s}>>M_s\]

For a D0-brane, gM_P^4\sim M_s/g_s, so there is no change below \Lambda_{QG}\sim M_s, and thus the relationship

    \[eM_P^{(D-2)/2}\geq \Lambda_{QG}\]

also holds for Dp-branes!!!!!!

Conclusion: any reasonable form of gauge-gravity unification implies a WGC and a limit on the number or the masses of the lightest species as a function of the heaviest ones! GUTs, string theory or any other BSM framework seem to be bound through quantum gravity!!!!!! Equivalently, it seems that quantum gravity limits the number of particle species and the degrees of freedom (or masses) of particles through the Planck mass, the number of dimensions and the gauge couplings. Yet, in another form: gravity is weak because unification with gauge theories limits the mass-to-charge ratio of particles.

Remark: the WGC surely has implications for black hole physics. Extremal black holes are likely unstable (or non-existent), and they decay into charged particles. This could have experimental signatures…

Well, more is to come soon. Indeed, I have some news to give you shortly (provided my health doesn't collapse again), and big news is coming for this blog. Probably a change of format, and, well, maybe I will stop posting in this style at about log#300. Life is changing and moving, and other changes, like the one I made when I acquired this domain years ago, are to begin when I prepare the infrastructure I am eager to own. By the way, you can follow me on instagram at @riemannium.

See you in another new blog post!!!!!!!!