LOG#223. Pi-logy.

Hi, there.

Today, some Pi-day celebration equations (there is a longer version of this post that I hope to publish next year). Some numbers and estimates for pi-related equations:

1st. Hawking radiation temperature (Schwarzschild’s 4d black hole case).

(1)   \begin{equation*}T_H=\dfrac{\hbar c^3}{8\mathbf{\pi} G_NMk_B}=6.2\cdot 10^{-8}\left(\dfrac{M}{M_\odot}\right)K\end{equation*}

2nd. Schwarzschild black hole surface area (4d).

(2)   \begin{equation*}4\mathbf{\pi}R_S^2=\dfrac{16\mathbf{\pi}G_N^2M^2}{c^4}=1.1\cdot 10^8\left(\dfrac{M}{M_\odot}\right)^2m^2\end{equation*}

3rd. Black hole power/luminosity (4d).

(3)   \begin{equation*}L_{BH}=P_{BH}=\dfrac{\hbar c^6}{15360\mathbf{\pi}G_N^ 2M^2}=9.0\cdot 10^{-29}\left(\dfrac{M_\odot}{M}\right)^2W\end{equation*}

4th. Black hole evaporation time (4d).

(4)   \begin{equation*}t_{e}=\dfrac{5120\mathbf{\pi}G^2_NM_0^3}{\hbar c^4}=8.41\cdot 10^{-17}\left(\dfrac{M}{1kg}\right)^3s=6.6\cdot 10^{74}\left(\dfrac{M}{M_\odot}\right)^3s=2.1\cdot 10^{67}\left(\dfrac{M}{M_\odot}\right)^3yrs\end{equation*}
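
As a quick numerical cross-check of the four estimates above, here is a minimal script in SI units (the CODATA-style constants and solar-mass value are inputs I am plugging in, not part of the original derivation):

```python
import math

# SI constants (CODATA values, assumed inputs for this sketch)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
M_sun = 1.989e30         # kg

def hawking_T(M):
    """Eq. (1): Hawking temperature of a 4d Schwarzschild black hole."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def horizon_area(M):
    """Eq. (2): horizon area A = 16 pi G^2 M^2 / c^4."""
    return 16 * math.pi * G**2 * M**2 / c**4

def bh_luminosity(M):
    """Eq. (3): Hawking power L = hbar c^6 / (15360 pi G^2 M^2)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def evaporation_time(M):
    """Eq. (4): evaporation time t = 5120 pi G^2 M^3 / (hbar c^4)."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(hawking_T(M_sun))        # ~6.2e-8 K
print(horizon_area(M_sun))     # ~1.1e8 m^2
print(bh_luminosity(M_sun))    # ~9.0e-29 W
print(evaporation_time(1.0))   # ~8.4e-17 s for a 1 kg black hole
```

Plugging in other masses (primordial black holes, etc.) is immediate.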

5th. Time to fall and reach the BH singularity for a negligible test mass (4d).

(5)   \begin{equation*}t_f(test)=\dfrac{\mathbf{\pi}}{2c}R_S=\dfrac{\mathbf{\pi}G_NM}{c^3}=1.5\cdot 10^{-5}\left(\dfrac{M}{M_\odot}\right)s\end{equation*}

6th. Time to fall and reach the BH singularity for a test mass with E=m (4d).

(6)   \begin{equation*}t_f(m)=\dfrac{2\mathbf{\pi}R_S}{c}=\dfrac{4\mathbf{\pi}G_NM}{c^3}=6.2\cdot 10^{-5}\left(\dfrac{M}{M_\odot}\right)s\end{equation*}

7th. Black hole entropy (4d) value in SI units.

(7)   \begin{equation*}S=\dfrac{k_B c^3}{4G_N\hbar}A_{BH}=\dfrac{k_BA_{BH}}{4L_p^2}=\dfrac{4\mathbf{\pi} G_Nk_BM^2}{\hbar c}=\dfrac{\mathbf{\pi}k_Bc^3A_{BH}}{2G_Nh}=1.5\cdot 10^{54}\left(\dfrac{M}{M_\odot}\right)^2J/K\end{equation*}
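
The fall times and the entropy can be checked numerically too; a minimal sketch in SI units, using the dimensionally consistent form S = 4πk_B G M²/(ħc) for one solar mass (constants are my assumed CODATA-style inputs):

```python
import math

# SI constants and one solar mass (assumed inputs)
hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_sun = 1.989e30

t_fall_test = math.pi * G * M_sun / c**3        # Eq. (5): ~1.5e-5 s
t_fall_mass = 4 * math.pi * G * M_sun / c**3    # Eq. (6): ~6.2e-5 s
S_BH = 4 * math.pi * k_B * G * M_sun**2 / (hbar * c)  # Eq. (7): ~1.5e54 J/K

print(t_fall_test, t_fall_mass, S_BH)
```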

8th. M2-M5 brane quantization.

(8)   \begin{equation*}T_{M2}T_{M5}=\dfrac{2\mathbf{\pi}N}{2k_{11}^2}=\dfrac{\mathbf{\pi}N}{k_{11}^2}\end{equation*}

9th. Gravitational wave power or GW luminosity.

    \[L_{GW}=-\dfrac{dE}{dt}=\left(\dfrac{32}{5c^5}\right)G^{7/3}\left(M_c\pi f_{GW}\right)^{10/3}\]

where, for circular orbits, the gravitational wave frequency is twice the orbital frequency:

    \[f_{GW}=2f_{orb}\]

10th. Chirp frequency or frequency rate.

For circular orbits, you have

    \[\dot{f}_{GW}=\dfrac{96}{5}\pi^{8/3}\left(\dfrac{GM_c}{c^3}\right)^{5/3}f_{GW}^{11/3}\]

11th. Coalescence time for GW merger (circular orbits).

    \[t_c=\dfrac{5}{256}\left(\dfrac{GM_c}{c^3}\right)^{-5/3}\left(\pi f_{GW}\right)^{-8/3}\]

12th. ISCO (inner stable circular orbit) frequency for binary mergers.

    \[f_{max,c}=f_{isco}=\dfrac{c^3}{6^{3/2}\pi GM}\approx 4.4\left(\dfrac{M_\odot}{M}\right) kHz\]
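
A short numerical check of the ISCO cutoff (note the frequency falls as 1/M; constants are my assumed SI inputs):

```python
import math

# SI constants and solar mass (assumed inputs)
c, G, M_sun = 2.99792458e8, 6.67430e-11, 1.989e30

def f_isco(M):
    """ISCO GW cutoff frequency f = c^3 / (6^{3/2} pi G M), in Hz."""
    return c**3 / (6**1.5 * math.pi * G * M)

print(f_isco(M_sun))       # ~4.4 kHz for one solar mass
print(f_isco(65 * M_sun))  # tens of Hz for a heavy binary's total mass
```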

13th. S-matrix in D-dimensions.


14th. Gravitational wave fluxes for gravitons and photons (4d).

    \[F_{GW}=\dfrac{c^3h^2\omega^2}{16\pi G_N}=\dfrac{\pi c^3h^2f^2}{4G_N}\]

where h is the GW strain, and for photons, the GW induced electromagnetic  flux reads

    \[F_{em}=\dfrac{c^3\omega^2 h^4}{8\pi G_N}=\dfrac{\pi c^3 f^2 h^4}{2G_N}\]

15th. Kerr-Newman black hole area and mass spectrum.

Any massive, rotating, charged black hole has an event horizon given by the following formula (with a=J/Mc and r_Q^2=G_NQ^2/4\pi\varepsilon_0c^4):

    \[r_+=\dfrac{G_NM}{c^2}+\sqrt{\left(\dfrac{G_NM}{c^2}\right)^2-a^2-r_Q^2}\]

This relation can be inverted to obtain the mass spectrum as a function of area, charge and angular momentum as follows (exercise!), in units with G_N=c=4\pi\varepsilon_0=1:

    \[M^2=\left(M_{irr}+\dfrac{Q^2}{4M_{irr}}\right)^2+\dfrac{J^2}{4M_{irr}^2}\;\;\;\;M_{irr}^2=\dfrac{A}{16\pi}\]

Challenge: modify the above expressions to include a cosmological constant factor.

16th. Universal quantum gravity potential at low energies.

Quantum gravity at low energy provides the following potential energy

    \[V_{QG}=-\dfrac{GM_1M_2}{r}\left[1+\dfrac{3G_N\left(M_1+M_2\right)}{rc^2}+\dfrac{41G_N\hbar}{10\pi r^2c^3}\right]\]

independent of the QG approach you use!
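
To get a feeling for the sizes involved, here is a sketch evaluating the two correction terms for the Sun-Earth system (my assumed SI inputs; the quantum term includes the dimensionally required factor of c³):

```python
import math

# SI constants and Sun-Earth parameters (assumed inputs)
G, c, hbar = 6.67430e-11, 2.99792458e8, 1.054571817e-34
M1, M2 = 1.989e30, 5.97e24   # Sun, Earth masses in kg
r = 1.496e11                 # 1 AU in metres

gr_correction = 3 * G * (M1 + M2) / (r * c**2)          # classical GR term
qg_correction = 41 * G * hbar / (10 * math.pi * r**2 * c**3)  # quantum term

print(gr_correction)  # ~3e-8: tiny, of perihelion-precession order
print(qg_correction)  # utterly negligible at 1 AU
```

The quantum correction is so small at macroscopic distances that it is unobservable in any solar-system test, which is why the potential is "universal" yet untestable there.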

17th. Running alpha strong.

    \[\alpha_s(Q^2)=\dfrac{4\pi}{\beta_0\ln\left(Q^2/\Lambda_{QCD}^2\right)}\]

For general QCD, the one-loop beta coefficient reads

    \[\beta_0=\dfrac{11N_c-2n_f}{3}\]

and the SM gives \beta_0>0 (N_c=3, n_f=6) and slope \beta(\alpha_s)<0 due to asymptotic freedom (antiscreening).

18th. Graviton energy density and single graviton energy density.

The graviton energy density reads off from GR as

    \[\rho_E=\dfrac{c^2\omega^2h^2}{32\pi G_N}\]

and for a single graviton, it reads

    \[\rho_E(single)=\dfrac{\hbar \omega^4}{c^3}=\dfrac{8\pi^3 h f^4}{c^3}\]

where h is the Planck constant, not the strain here.

I have many other pi-logy equations, but let me reserve them for a future longer post!

See you all, very soon!

LOG#222. The New SI.

By the time we find new physics, we will already have redefined the SI in terms of base units and fundamental constants.

The definition of the new SI is the following: the SI is the system in which the constants below are taken to be exact.

  • The unperturbed ground state hyperfine splitting (transition) frequency of the caesium-133 atom \Delta f(Cs-133) is exactly

    \[\Delta f=9192631770 Hz\]

Thus, frequency is fundamental, and time is a base unit derived from frequency. One second is the time

    \[1s=\dfrac{9192631770}{\Delta f}\]

and 1 Hz is the reciprocal of the above quantity, exactly too.

  • The speed of light in vacuum c is exactly the quantity

    \[c=299792458m\cdot s^{-1}\]

Using the previous and this definition, you can define the meter to be exactly the amount of length

    \[1m=\dfrac{9192631770c}{299792458\Delta f}\]

  • The elementary charge e is exactly the quantity

    \[e=1.602176634\cdot 10^{-19}C\]

Thus, the old electric current base unit, the ampère, is the unit in which you can express charge into current, in corresponding units, with the following conversion constants:

    \[1C=\dfrac{e}{1.602176634\cdot 10^{-19}}=6.241509074\cdot 10^{18}e\]

    \[1A=\dfrac{1C}{1s}=\dfrac{6.241509074\cdot 10^{18}e\,\Delta f}{9192631770}=\dfrac{e\,\Delta f}{(1.602176634\cdot 10^{-19})(9192631770)}\]
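
A one-line sanity check of the charge conversion, using the exact 2019 SI value of e (an assumed input in this sketch):

```python
# With e fixed exactly, one coulomb is an exact number of elementary charges.
e = 1.602176634e-19     # C, exact in the 2019 SI
delta_f = 9192631770    # Hz, exact caesium hyperfine frequency

coulomb_in_e = 1 / e    # elementary charges per coulomb
print(coulomb_in_e)     # ~6.2415e18
# 1 A = 1 C / 1 s, with the second itself fixed through delta_f.
```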

  • The Planck constant is exactly defined to be

    \[h=6.62607015\cdot 10^{-34}J\cdot s\]

and thus the kilogram is defined in terms of fundamental constants as

    \[1kg=\dfrac{(299792458)^2h\Delta f}{(6.62607015\cdot 10^{-34})(9192631770)c^2}\]

  • The Boltzmann constant is exactly

    \[k_B=1.380649\cdot 10^{-23}J\cdot K^{-1}\]

so (with 1J=1kg\cdot 1m^2\cdot 1s^{-2}) then

    \[1K=\dfrac{1.380649\cdot 10^{-23}}{k_B}J=\dfrac{(1.380649\cdot 10^{-23})h\Delta f}{(6.62607015\cdot 10^{-34})(9192631770)k_B}\]

  • The Avogadro constant is defined exactly to be

    \[N_A=6.02214076\cdot 10^{23}mol^{-1}\]

so the mole is

    \[1mol=\dfrac{6.02214076\cdot 10^{23}}{N_A}\]

  • The luminous efficacy K_{cd} of monochromatic radiation of frequency 540 THz is exactly defined to be 683 lm/W. Thus, as the steradian is dimensionless, lm=cd\cdot sr, and the candela definition also holds exactly:

    \[1cd=\dfrac{K_{cd}}{683}kg\cdot m^2\cdot s^{-3}\cdot sr^{-1}=\dfrac{\Delta f^2 hK_{cd}}{(9192631770)^2(6.62607015\cdot 10^{-34})\cdot 683}\]

See you in another blog post!

LOG#221. Kepler+Cosmic speeds.

Johannes Kepler (copy of a lost original from 1610).

Hi, there! Today some Kepler third law stuff plus cosmic speed calculations and formulae. Cosmic speed sounds cool…But first…

From High School, you surely calculated how fast A PLANET moves around the Sun (or any star, indeed). To simplify things, take units with G_N=4\pi^2 for a moment (a nasty trick that works!). Kepler third law reads

    \[T^2=\dfrac{a^3}{M}\]
where M=M_\star+M_p is the total mass, the sum of the star mass and the planet mass, a is the semi-major axis of the ellipse, and T is the period of the motion. Note that for binary systems like binary stars, you can not neglect the M_p term, since it is comparable to the larger mass. Simple algebra lets you obtain

    \[v_p=\dfrac{2\pi a}{T}\]
Have you ever asked yourself what the STAR speed is? Generally speaking, the star is NOT static in gravitation either! So forget the picture in your head telling you that the Sun is fixed: it also moves. What is the star speed? It turns out that you can easily compute the star speed with the aid of the conservation of linear momentum. Linear momentum p=mv is conserved, due to translational invariance in 3d space, and thus,

    \[M_\star V_\star+m_pv_p=\mbox{constant}\]
Set the constant to 0, and take the modulus, so you can see now that

    \[M_\star V_\star=m_pv_p\]
Then, you get

    \[V_\star=\dfrac{m_p}{M_\star}v_p\]
or equivalently

    \[V_\star=\dfrac{m_p}{M_\star}\dfrac{2\pi a}{T}\]
In our units with G_N=4\pi^2, speeds are indeed measured in AU/yr! Now, you can not only calculate the speed of the Earth around the Sun; you can also calculate the speed of the Sun around the Earth. You can extend this argument and calculate the difference between the planet speed, the star speed and the center of mass speed. It is a quite pedagogical exercise! In fact, there are two extra corrections to the above formulae in the general setting of celestial mechanics: you must include the effects of the eccentricity and the inclination of the system with respect to the observer. The above formulae assume you look perpendicular to the system, and that the eccentricity is small or zero. If you use standard G_N SI units, you get instead

    \[v_{p}=\left(\dfrac{2\pi GM}{T}\right)^{1/3}\]


    \[V_\star=\dfrac{m_p}{M_\star}\left(\dfrac{2\pi GM}{T}\right)^{1/3}\]

and generally

    \[V_{CM}=\dfrac{m_pv_p+M_\star V_\star}{M}\]

and normally you choose V_{CM}=0 for convenience, but it can also be calculated with respect to the planet or star frames!
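
The SI formula above can be tried on the Earth-Sun pair directly; a minimal sketch (masses and the year in seconds are my assumed inputs, circular edge-on orbit as in the text):

```python
import math

# Assumed SI inputs: G, Sun and Earth masses, one year in seconds
G = 6.67430e-11
M_sun, M_earth = 1.989e30, 5.97e24
T = 3.156e7

# Planet speed v_p = (2 pi G M / T)^{1/3}, star reflex speed from momentum
v_p = (2 * math.pi * G * (M_sun + M_earth) / T) ** (1 / 3)
V_star = (M_earth / M_sun) * v_p

print(v_p)     # ~2.98e4 m/s: the familiar ~30 km/s of the Earth
print(V_star)  # ~0.09 m/s: the Sun's tiny reflex speed due to the Earth
```

This ~9 cm/s reflex motion is precisely the kind of signal radial-velocity planet hunters chase.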

Angular speeds in the Kepler law (or velocities/speeds) are related to the number of space dimensions in space-time! Thus, in D=d+1 space-time you would get angular speeds

    \[\omega^2\sim\dfrac{G_NM}{r^{d}}\]
and periods would scale as R^{d/2}. Moreover, if you are orbiting not a star but a rotating GR object, it is better described by the Kerr metric. In Kerr spacetimes, Kepler third law gets generalized into (G_N=c=1)

    \[\Omega=\pm\dfrac{M^{1/2}}{r^{3/2}\pm aM^{1/2}}\]

where a is now the Kerr parameter. Reintroduce units to get instead

    \[\Omega=\pm\dfrac{\sqrt{G_NM}}{r^{3/2}\pm a\sqrt{G_NM}/c}\]
Kepler third law can also be extended, for instance, in Finsler-like general relativity. There, you could get


or even stranger formulae in other modified theories of gravity are possible. Therefore, if you modify gravity with extra dimensions or more general theories, you obtain corrections to Kepler third law. Even simple rotating black holes provide a generalized Kepler third law (the above formula is for equatorial orbits only!) for orbiting bodies! Thus, observations of orbital patterns could provide hints of modified gravity. Unfortunately, no observation has yet confirmed a MOG (MOdified Gravity) or extended theory of gravity; this implies strong bounds on the possible sizes of these corrections, or discards them, for now!

To end this post, I will review the so-called cosmic speeds:

  • First cosmic speed. Namely, the orbital speed. For the usual spacetime dimensions it reads

    \[v_1=\sqrt{\dfrac{G_NM}{R}}\]

  • Second cosmic speed. It is the escape velocity/speed. It yields

    \[v_2=\sqrt{\dfrac{2G_NM}{R}}=\sqrt{2}\,v_1\]

  • Third cosmic speed. It is the escape velocity from the solar system. A naive calculation of Earth's third cosmic speed gives

    \[V_3'=\sqrt{2}V_o\approx 42.1km/s\]

but the fact that Earth is also moving lets us reduce this value to a lower number, since V(oS)=V_3'-V_o=12.3km/s, where V_o=29.8km/s is the orbital Earth speed, such that

    \[V_3=\sqrt{V(oS)^2+v_2^2}=\sqrt{(12.3)^2+(11.2)^2}km/s\approx 16.6km/s\]

  • Fourth cosmic speed. You naively need V_4'=350km/s to escape from the Milky Way, but as the solar system is moving with respect to it, you can easily show (exercise for you!) that you would need only V_4=130km/s to escape, similarly to the case of the third cosmic speed.
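
The first three cosmic speeds for the Earth can be scripted in a few lines (Earth and Sun parameters are my assumed SI inputs; same circular-orbit logic as above):

```python
import math

# Assumed SI inputs
G = 6.67430e-11
M_earth, R_earth = 5.97e24, 6.371e6
M_sun, r_orbit = 1.989e30, 1.496e11

v1 = math.sqrt(G * M_earth / R_earth)       # first cosmic (orbital) speed
v2 = math.sqrt(2 * G * M_earth / R_earth)   # second cosmic (escape) speed
v_orb = math.sqrt(G * M_sun / r_orbit)      # Earth's orbital speed ~29.8 km/s
v3_naive = math.sqrt(2) * v_orb             # escape from the Sun at 1 AU
# Launching along Earth's motion, only the difference must be supplied,
# combined in quadrature with the escape speed from Earth itself:
v3 = math.sqrt((v3_naive - v_orb) ** 2 + v2 ** 2)

print(v1 / 1e3)  # ~7.9 km/s
print(v2 / 1e3)  # ~11.2 km/s
print(v3 / 1e3)  # ~16.6 km/s
```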

Remember that the instantaneous speed (the vis-viva equation) is:

    \[v^2=G_NM\left(\dfrac{2}{r}-\dfrac{1}{a}\right)\]

Let me remark on two final constants (the hidden secret constant will be the topic of a future blog post about the full Kepler problem and its generalizations) in formulae for the reduced Kepler 2-body problem:

    \[E_t=-\dfrac{G_NM\mu}{2R}\]

    \[L=\mu\sqrt{G_NMR(1-e^2)}\]

where E_t, L are the total energy and angular momentum, M is the total mass, \mu=M_1M_2/M is the reduced mass, R the orbital semi-major axis, and e is the orbital eccentricity. Are you eccentric today? Problem: What would be the escape velocity from our Universe?

A summary table:

See you in a future new blog post!

P.S.: Some earthling speeds are

i) Rotational speed of Earth on the equator is about 1670km/h or 0.46km/s.

ii) Rotational speed of Earth around the sun is about 107000km/h, or about 30 km/s.

iii) Rotational speed of the solar system (the sun) around the Milky Way is about 230km/s (828000km/h).

iv) Milky Way speed towards the Great Attractor is about 611km/s, or about 2.2 million km/h.

v) Milky Way speed with respect to the CMB is about 2268000km/h or about 630 km/s.

LOG#220. Higgs&Symmetries.

    \[V(H)=m_H^2H^2+gH^3+\lambda H^4\]

Why are leaves green? Why is the sky blue? Why are diamonds hard? Quantum mechanics can indeed answer those questions, and even more subtle ones. The Standard Model (SM) is the frontier of our knowledge of microscopic things; that is, the SM represents the best quantum theory explaining the building blocks of the Universe (well, 5% of it actually, but that does not matter for the purposes of this new blog post).

  • Six leptons (plus their antiparticles): (e,\nu_e), (\mu,\nu_\mu), (\tau,\nu_\tau).
  • Six quarks (plus their antiparticles, in 3 colors each): (u,d), (c,s), (t,b).
  • Four bosons (up to gauge charges and their antiparticles): W^+,Z^0,\gamma, g.
  • The mass giver, the Higgs field H_0, for elementary particles. Gluons and photons are Higgs-transparent; particles that interact more strongly with the Higgs field are more massive.

Timeline(short review):

  • 1932: there are protons, electrons, the neutron and the positron.
  • 1937: the muon is discovered.
  • 1940s: hadron explosion. Pions, kaons, lambdas, deltas, sigmas and other exotic states are discovered. Puzzlement.
  • 1970s: quark theory (aces). Previous S-matrix approaches are substituted by the QCD gauge theory.
  • 1980s: gauge bosons discovered.
  • 1987: supernova in the Large Magellanic Cloud. Neutrinos arrive before the photons (detected at Kamiokande-II).
  • 1995: top quark is found.
  • 1998: neutrino oscillations are confirmed. Neutrinos are massive. First evidence that there is something beyond the SM.  Dark energy is found.
  • 2012: Higgs-like boson discovered at the LHC. No new physics signal is found, circa 2019.

Interactions and forces keep everything united together. Why are there 3 generations with increasing mass? The flavor problem. Why is the Higgs mass, or the electroweak scale, so small compared with the Planck mass/scale? The hierarchy problem. Little hierarchy problem: why are neutrinos lighter than the other SM particles? Why are the relative strengths of the interactions the way we measure them? Pion exchange is, for atomic nuclei, analogous to the Van der Waals forces between atoms. Pion theory is an effective theory of nuclear forces.

There are 4 forces with messengers or messenger particles: gluons, photons, the W and Z bosons, and the graviton; to these we add the Higgs particle as mass giver. Should we consider the mass giver Higgs as another interaction? Particles are classified by angular momentum:

    \[J=n\dfrac{\hbar}{2}\]

for any integer n=0,1,2,3,\ldots,\infty. Then, there are half-integer-spin particles (following the Pauli exclusion principle), and there are integer-spin particles. These are fermions and bosons: matter particles and force messengers. How do interactions arise from fields? The keyword is symmetry. I am not going to talk about SUSY and why it is inevitable in some form we do not know today, but I will tell you about symmetry and its consequences: conservation laws. Indeed, Emmy Noether's two theorems go even beyond simple conservation laws. E. Noether's two theorems are about invariance under finite and infinite symmetry groups. Global symmetries give you conserved quantities; local (infinite dimensional) symmetries give you identities between the field equations/equations of motion for particles/fields. Likely, Noether's theorems are the 2 most beautiful theorems in mathematical physics (if you know any other theorem rivaling them, let me know!).

Have you ever wondered why energy is conserved? Let me give you some hints:

Example 1. Free particles. 

Energy is conserved. There is symmetry under translations in time! Suppose there is a particle with mass M. Take the Newton law:

    \[M\dfrac{dv}{dt}=F\]

for free particles, F=0, and it yields that

    \[\dfrac{dv}{dt}=0\Longrightarrow\dfrac{d}{dt}\left(\dfrac{1}{2}Mv^2\right)=Mv\dfrac{dv}{dt}=0\Longrightarrow\dfrac{1}{2}Mv^2=\mbox{constant}\]

Name that constant kinetic energy, or total energy, so you get

    \[E=\dfrac{1}{2}Mv^2=\mbox{constant}\]

Simple and beautiful. Energy is something that is conserved when you have something that is invariant under temporal translations, i.e., motion in time.

Example 2. Motion under constant force.

Generalize the above example to the motion of a particle under a constant force F. Then, you get

    \[m\ddot{x}=F\]

Then, again this is invariant under shifts in time t\rightarrow t+\varepsilon, and there is a conserved energy function, namely

    \[E=\dfrac{1}{2}m\dot{x}^2-Fx\]

with time derivative

    \[\dot{E}=\dot{x}\left(m\ddot{x}-F\right)\]

Now, use the equation of motion above \ddot{x}=F/m, and you effectively get that \dot{E}=0. Energy is conserved again! This is important in the case of gravitational field near the surface and other examples!

Example 3. Motion of simple harmonic oscillators.

Now, you get the equation of motion (EOM):

    \[m\ddot{x}=-kx\]

Again, under time translations, the EOM is invariant, so there is some energy functional. It yields that the classical formula holds:

    \[E=\dfrac{1}{2}m\dot{x}^2+\dfrac{1}{2}kx^2=\mbox{constant}\]

Example 4. Motion in a gravitational (newtonian) field.

    \[m\ddot{r}=-\dfrac{G_NMm}{r^2}\]

Again, the classical energy function (by time invariance) reads

    \[E=\dfrac{1}{2}m\dot{r}^2-\dfrac{G_NMm}{r}\]

Check of constancy, using the EOM as above:

    \[\dot{E}=\dot{r}\left(m\ddot{r}+\dfrac{G_NMm}{r^2}\right)=0\]

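
The energy-conservation checks in the examples above can also be seen numerically: a minimal sketch (my own illustration, not from the original post) integrating the harmonic oscillator of Example 3 with a leapfrog scheme and watching E stay constant:

```python
# Integrate m x'' = -k x with a leapfrog (velocity-Verlet) scheme and check
# that E = (1/2) m v^2 + (1/2) k x^2 stays constant to integrator accuracy.
m, k = 1.0, 4.0        # assumed toy parameters
x, v = 1.0, 0.0        # initial conditions
dt = 1e-3

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * k * x**2

E0 = energy(x, v)
for _ in range(100000):           # ~100 time units, many oscillation periods
    v += 0.5 * dt * (-k * x / m)  # half kick
    x += dt * v                   # drift
    v += 0.5 * dt * (-k * x / m)  # half kick

drift = abs(energy(x, v) - E0) / E0
print(drift)  # tiny: the symplectic integrator respects the conserved E
```

The time-translation symmetry of the EOM is what guarantees that an exact integrator would give drift exactly zero.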
Symmetries or invariances are tied more deeply to conservation laws by Noether's first theorem:

  • Temporal translation invariance implies energy conservation.
  • Invariance under spatial translations implies linear momentum conservation as well.
  • Rotational invariance means conservation of angular momentum, certain bivector or antisymmetric matrix in higher dimensions.
  • Changes in reference frames are tied to conservation of the invariant mass and spin of particles. Boosts or Lorentz transformations certainly have non-trivial conservation laws (a center-of-mass-like conservation law). Boosts in galilean relativity have constancy of the center-of-mass motion under these transformations.
  • Higher non-trivial symmetries have corresponding conservation laws. Some non-trivial examples are the Kepler problem, string theory or general relativity theories. Also, invariance under scale transformations has conservation laws.

Remark: under boosts (SR), you get the conserved quantities

    \[\vec{K}=\dfrac{E}{c^2}\vec{x}-\vec{p}\,t=\mbox{constant}\]

Remark(II): discrete symmetries, produce multiplicative conservation laws of parity (P), charge conjugation (C) and time reversal (T). Particles are usually classified in the Particle Data Booklet by J^{P}.

Remark(III): hidden anomalous symmetries in the SM are baryon number and lepton number conservation. Beyond the Standard Model, these numbers can be violated and thus the proton or other generally stable particles could decay. Experimentally, fortunately for us, \tau_p\geq 10^{34}yrs. The proton is the lightest particle with non-zero baryon number, so (if baryon number is conserved) it can not decay. Baryon number is usually defined as B=(n_q-n_{\overline{q}})/3. Muons are unstable in microseconds; rho particles are unstable in yoctoseconds.

What happens with internal (non-space-time) symmetries? In any quantum theory and Quantum Field Theory, global internal symmetries are associated to conservation laws and quantum numbers. Even discrete symmetries have conserved quantities (discrete and multiplicative quantities indeed). Particles are field excitations. So, how do internal symmetries arise in field equations? Let me assume A=A(x) and B=B(x), and suppose that A\rightarrow A+q and B\rightarrow B-q is a certain symmetry. The field equation is, say:

    \[\partial_\mu\partial^\mu(A-B)=0\]
Under global q=constant symmetry transformations, you can easily check the invariance of the above field equation. Local field symmetries imply the existence of compensating fields and of identities, called Noether identities, between field equations. That local gauge invariance implies the existence of gauge fields is a triumph of modern mathematical physics, and it is due to Noether in the end. Under LOCAL gauge transformations,

    \[(A-B)\rightarrow (A-B)+2q\]

    \[(A+B)\rightarrow (A+B)\]

And thus, the above EOM is NOT invariant. However, you can RESTORE invariance introducing gauge (compensating) fields. How????? Let me show you a Dirac-like equation example. Write the EOM

    \[\left[i(\partial_t-\partial_x)+m\right]\Psi=0\]
The local U(1) transformation of the wave function or field is a phase transformation

    \[\Psi\rightarrow \exp\left(i\alpha\right)\Psi\]

and it produces an extra term, since

    \[i(\partial_t-\partial_x)\left(e^{i\alpha}\Psi\right)=e^{i\alpha}\left[i(\partial_t-\partial_x)\Psi-(\partial_t\alpha-\partial_x\alpha)\Psi\right]\]

You can check that the Dirac-like equation above, \left[i(\partial_t-\partial_x)+m\right]\Psi=0, is NOT locally gauge invariant. You must introduce a new field, the gauge field A, transforming under the symmetry as

    \[A\rightarrow A-\partial_x\alpha+\partial_t\alpha\]

Then, a modified (charged) field theory arises that IS invariant under gauge transformations. Let us do the calculations explicitly. The modified EOM reads

    \[\left[i(\partial_t-\partial_x)+A+m\right]\Psi=0\]

Under

    \[\Psi\rightarrow e^{i\alpha}\Psi\;\;\;A\rightarrow A-\partial_x\alpha+\partial_t\alpha\]

everything transforms and you get

    \[\left[i(\partial_t-\partial_x)+A+m\right]\Psi\rightarrow e^{i\alpha}\left[i(\partial_t-\partial_x)+A+m\right]\Psi=0\]

Local invariance holds now!!!!!!! This U(1) trick is mimicked for the SU(2) and SU(3) (non-abelian) symmetries, and weak charge (flavor) and color charge are thus related to non-abelian gauge invariances! A problem arises, however, within the SU(2) case (not in the color force case). The symmetries of the W and Z, when local, behave badly when mass is present. In other words, the mass of the gauge bosons W, Z spoils gauge invariance. That is where the Higgs field and the Higgs mechanism arise. In 1964, Higgs (and independently other researchers, but the particles are named after Higgs himself, with Nobel Prize merits) introduced a new field and new particles to restore weak gauge invariance when mass terms are present. The name spontaneous symmetry breaking, or hidden symmetry, is also used here. The ideas are powerful:

  • There is a new field, the Higgs field, permeating the space-time, like a fluid. Mass is similar to friction with this relativistic field.
  • Associated to the Higgs field, there are excitations of the field, called Higgs particles. Higgs particles are waves in the Higgs field, and these waves give masses to particles interacting with the Higgs. Particles transparent to the Higgs field get no mass.
  • To get a Higgs field requires a lot of concentrated energy.

Simplest Higgs set-up:

Take an electron field, \Psi (it can also be a boson field like the W or Z).

Under a gauge symmetry, it transforms as \Psi\rightarrow q\Psi and its mass term as m\Psi^2\rightarrow mq^2\Psi^2. Here, q=e^{i\alpha}. Thus, the mass term is not invariant under local gauge transformations. Define H such that, under the symmetry,

    \[H\rightarrow \dfrac{H}{q^2}\]

Note that we will get gauge invariance plus an interaction of type H\Psi^2. Then, the product H\Psi^2 is invariant under gauge transformations. Expand H around a vacuum as

    \[H=\langle H\rangle_0+h\]


    \[H\Psi^2=\langle H \rangle_0\Psi^2+h\Psi^2=m\Psi^2+h\Psi^2\]

where we can define “mass” as

    \[m\equiv \langle H\rangle_0 \lambda\]

and the Higgs coupling

    \[h=\lambda \overline{h}\]

In general,

    \[y_pH\Psi^2=y_p\langle H\rangle_0\Psi^2+y_ph\Psi^2\]

Thus, you get a generic mass term for ANY fundamental (elementary, not composite) particle

    \[m=y_p\langle H\rangle_0\]


In other words,

    \[\mbox{Mass}=\mbox{Yukawa constant}\times\mbox{Higgs v.e.v.}\]

Experimentally, the Higgs v.e.v. is about 246 GeV, and it was known long before the Higgs boson discovery in 2012. However, some issues in the SM can not be solved:
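
As a numeric aside on Mass = Yukawa constant × Higgs v.e.v.: in the usual normalization m=yv/\sqrt{2} with v\approx 246 GeV (so \langle H\rangle_0\approx 174 GeV), the Yukawa couplings implied by approximate particle masses (my assumed inputs, not from the post) are easy to tabulate:

```python
import math

# Assumed convention: m = y * vev with vev = 246/sqrt(2) GeV ~ 174 GeV
vev = 246.0 / math.sqrt(2)  # GeV

masses = {            # approximate fermion masses in GeV (assumed inputs)
    "electron": 0.000511,
    "muon": 0.1057,
    "tau": 1.777,
    "bottom": 4.18,
    "top": 172.8,
}

yukawas = {name: m / vev for name, m in masses.items()}
for name, y in yukawas.items():
    print(name, y)  # note the top Yukawa comes out remarkably close to 1
```

The huge spread of these couplings, from ~3e-6 to ~1, is exactly the "origin of mass" puzzle restated.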

  • The Higgs field introduces new Yukawa interactions not coming from any SM symmetry.
  • The Higgs mass itself is not fixed by any SM symmetry. Indeed, it could in principle have been very heavy, but some theoretical arguments were known before the Higgs discovery: if the Higgs existed, it could not be very heavy without spoiling the properties of the known SM. Another mechanism would have had to be invented if the Higgs field had not been found at the LHC. SSB and Higgs fields could not be too heavy. However, something protects the Higgs from becoming heavy through loop corrections to its mass. What is it? We do not know. It could be supersymmetry (SUSY) or any other new kind of symmetry.
  • The Higgs vacuum is close to being metastable, with a lifetime of about t\sim 10^{500}yrs. New particles can stabilize the Higgs vacuum, but it is a hard problem!
  • The Higgs field can NOT be the dark energy, it is too heavy. However, the Higgs field could be the inflaton. No proof of this is known, it is speculative.
  • The Higgs field is not consistent with the known value of the cosmological constant. However, we can not be hasty here, since we do not know what dark energy is and we do not know what the Higgs-like particle truly is. GR is not a quantum theory, not a YM theory at least, and we do not know how to make GR consistent with the SM. That is why they are usually considered apart from each other.
  • The search for theories beyond the SM (BSM theories), like string theory or loop quantum gravity, is being guided by the same principles that Maxwell used to derive his electromagnetic synthesis in the 19th century. Similarly, Einstein was guided by symmetry to SR and to GR from simple principles. The SM and GR are believed to be effective field theories, approximate theories at low energies. We need GUTs or a TOE for a further final unification.
  • The origin of mass is now turned into the problem of why the Yukawa Higgs couplings have the values we observe. Even the Higgs field has its own self-coupling. We need a better theory to understand the origin of mass.

What is the future of fundamental physics? Expensive experiments and cheap experiments. Brilliant minds observing the Universe with new tools. Philosophy is not as useful as before. Scientific advances require feedback between theory and experiment. New colliders (CLIC, the muon collider, the Chinese 100TeV collider, the FCC,…) will require complementary projects and extra dark matter/dark energy experiments. Gravitational waves, gamma-ray astronomy, neutrino astronomy and multimessenger astronomy form an exciting new field. Theoretical speculations, like those guided by philosophy, are not useful without experimental support. We need data and tested hypotheses!

LOG#219. Stranger Planck things.

“(…)Suzie, what is the Planck constant value?(…)”

Nice to see the fundamental quantum constant in a TV series, even if only shallowly. The rest was awful science, and I guess bad (or even no) scientific advisors were behind the big scientific failure of Stranger Things III's most epic moment. I am not counting the song as a failure, since I love The NeverEnding Story theme by Limahl. Anyway, to the subject…

The Planck constant is essential in the new S.I., and it is also a fundamental constant of physics. It is related to the action functional and to the fundamental relationships of angular momentum and energy quantization. Originally, the Planck constant emerged because the black body radiation spectrum was divergent: Planck (1900) introduced quantization of energy to solve the divergence. Later, Einstein used light quanta to explain the photoelectric effect, and later still, the quantization of the action was noted to be more fundamental than energy or angular momentum quantization. For instance:

    \[P= \sum_\gamma\exp(iS_\gamma/\hbar)\]

    \[E=\hbar \omega=hf\]

    \[L=n\hbar\]

where \hbar=h/2\pi is the Planck constant divided by two pi. The measured value of h (or \hbar) has evolved over time. \hbar or h are not a formula; each is a constant taking a value in a certain system of units. By the way, nothing is said in the episode about the units (I suppose SI, but given that the TV series is American, I feared seeing a non-SI value too). Anyway, here are the values of the Planck constant through time:

1st. 1969. Here, h=6.626186(57)\cdot 10^{-34}J\cdot s.

2nd. 1973. h=6.626176(36)\cdot 10^{-34}J\cdot s. THIS is the value that should have been seen on the screen at Stranger Things III, given its time frame!

3rd. 1986. This is the future relative to the time when Stranger Things III happens, as are the remaining values… Here,

    \[h=6.6260755(40)\cdot 10^{-34}J\cdot s\]

4th. 1998. Here, you see

    \[h=6.62606876(52)\cdot 10^{-34}J\cdot s\]

5th. 2002. At this time, I wonder why CODATA and NIST reduced the precision here, but it seems it happened:

    \[h=6.6260693(11)\cdot 10^{-34}J\cdot s\]

6th. 2006. Here, you find out

    \[h=6.62606896(33)\cdot 10^{-34}J\cdot s\]

7th. 2010. There, you get

    \[h=6.62606957(29)\cdot 10^{-34}J\cdot s\]

8th. 2014. This IS the value of the Planck constant Suzie recalls (except that she forgot the power of ten and the units!) in the episode:

    \[h=6.62607004(81)\cdot 10^{-34}J\cdot s\]

Question: did Suzie live in the future or did she travel to the future to get the 2014 h value?

9th. 2018. The new S.I. exact adjustment (to be revised likely in the near future):

    \[h=6.62607015\cdot 10^{-34}J\cdot s\]

Remark: as it is an EXACT definition, no uncertainty is written between brackets!

A summary table, including even earlier values of the Planck constant than those above:



-Suzie is from the future (like Frequency, the movie!).

-Suzie is a time traveler (like Back to the future).

-Stranger Things writers have bad (or null) scientific advisors.

The most probable explanation is the latter, for me. Any other?

Comment: all of this can easily be checked on the internet quite fast. Go to the CODATA or NIST websites and check the values and the units, but please, recall the system of units we scientists use, and do not forget the power of ten if there is a common scale factor above the data table!

LOG#218. Atomic elephants.

Have you ever asked yourself why there are no elephants or humans with atomic sizes? Have you ever asked why the Earth is so big in comparison with atoms? Why are nuclei about 10^4-10^5 times smaller than atoms? I will be discussing these questions in this short blog post…

1st. Why R_{Earth}>>R_{atom}?

Answer: It is related to the relative strength of gravity and electromagnetism, due to the Gauss law! In 3d space (4d spacetime), both the Coulomb and Newton laws are inversely proportional to the square of the distance (spherical shells!). Then, balancing the gravitational energy per atom against the atomic electric energy for a body of N atoms:

    \[N^{2/3}\dfrac{G_NM_p^2}{R_a}\sim \dfrac{K_Ce^2}{R_a}\Longrightarrow N\sim\left(\dfrac{K_Ce^2}{G_NM_p^2}\right)^{3/2}\]
and from this you get

    \[\dfrac{R_E}{R_a}=\sqrt{\dfrac{K_Ce^2}{G_NM_p^2}}=\left(\dfrac{e}{M_p}\right)\sqrt{\dfrac{K_C}{G_N}}\sim 10^{18}\]

The reason is that the ratio of the elementary charge to the proton mass, times the square root of the ratio of the Coulomb to the Newton constant, is big. The Earth is big because of the nature of the proton mass and the electron charge, and the ratio of the Coulomb to Newton constants. Thus, you can not find atomic planets or atoms with the size of the Earth… And do not go crazy and ask to change the values of those constants…
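
The dimensionless number above is quick to evaluate (constants are my assumed CODATA-style inputs):

```python
import math

# Assumed SI inputs
K_C = 8.9875517923e9    # Coulomb constant, N m^2 / C^2
G = 6.67430e-11         # Newton constant
e = 1.602176634e-19     # elementary charge, C
m_p = 1.67262192e-27    # proton mass, kg

# sqrt(K_C e^2 / (G m_p^2)): how much bigger a planet is than an atom
ratio = math.sqrt(K_C * e**2 / (G * m_p**2))
print(ratio)  # ~1.1e18, the ~10^18 quoted in the text
```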

2nd. Why R_{Elephant}>>R_{atom}?

Answer: Similarly, you can compute the ratio between the gravitational energy of an elephant and the electric energy of any atom:


Then, you get

    \[\dfrac{R_{El}}{R_{atom}}=\left(\dfrac{K_Ce^2}{G_NM_p^2}\right)^{1/3}\sim 10^{12}\Longrightarrow R_{El}\sim 10^2(10)\,m\]

Therefore, elephants or humans are constrained to a few meters…Great!

3rd. Why R_{atom}=10^{5}R_{nucleus}?

Answer: This turns out to be the hardest to understand. The secret behind this answer lies in the Yukawa force and the exponential screening (short-distance) behaviour of nuclear forces, which makes them confined to a few proton radii (and, of course, in the coupling constants). Take for instance the strong force case; then

    \[V_{Yukawa}(r)=-\alpha_s\dfrac{\hbar c}{r}e^{-r/r_0}\;\;\;\;V_{Coulomb}(r)=-\alpha_0\dfrac{\hbar c}{r}\]

    \[\dfrac{R_{atom}}{R_{nucleus}}\sim\dfrac{\alpha_s}{\alpha_0}e^{r/2r_0}\]
Plugging \alpha_s\sim 1, \alpha_0=1/137, r/2r_0\sim 5, you guess that the above ratio is

    \[\dfrac{R_{atom}}{R_{nucleus}}\sim 137\cdot e^5\sim 10^4-10^5\]
Fantastic! Allons-y!

Proton decay is expected naturally at some point between 10^{45}yrs and 10^{120}yrs from “standard assumptions” about virtual black holes or space-time foam. It is inevitable to get some dimension 5 operator for neutrino masses, in the symbolic form (LH^+LH^+)/M at about 10^{10} or 10^{14} GeV, and leptoquarks triggering proton decays at about 10^{16} GeV, even without the above general quantum gravity prediction. There are also interesting dimension 6 electric dipole operators for neutrons and other neutral particles at scales of about 30-100 TeV! The LHC is hardly sensitive to these couplings, beyond indirect measurements, but future colliders at 100 or 1000 TeV (the Pevatron!) could test the flavor-changing processes due to kaon-antikaon systems. Much more subtle are the issues of the Higgs mass, the baryon and lepton number symmetries, and the sources of CPT violation we have already investigated during the past and current century. It is a mess nobody fully understands yet. We have understood the kinematics and dynamics of spin 1 and 1/2 particles, and even shallowly explored the 2012 Higgs (spin zero) discovery! Higher spin particles? Spin two is gravity, and surely it exists, after the GW observations by LIGO and future observations. There are the spin 3/2 particles, expected from general supersymmetry arguments. Spin 5/2 or spin 3 and higher interactions are much more complicated. Except for some theories of gravity such as hypergravity, Vasiliev higher spin theory, and the higher spin theory coming from string/superstring theory, they are really hard to analyze. In fact, string theory is an effective theory from higher spin theory, containing gravity! But the efforts towards an understanding of higher spin theories are yet to come. We are far from a complete view of higher spin theories.

What about the LHC expectations? Colliders are machines whose performance is quantified by a quantity called luminosity. Luminosity is related to elementary cross sections from particle physics: the number of events per unit time is essentially luminosity times cross section…

    \[\dfrac{N_E}{Time}=\mathcal{L}\cdot \sigma\]

For the LHC, \mathcal{L}\sim 10^{34}cm^{-2}s^{-1}, and SM cross sections are measured in barns, 1barn=10^{-24}cm^2=10^{-28}m^2. The LHC works at 14 TeV, and 1TeV^{-1}\sim 10^{-19}m. Typical electromagnetic (or strong) interactions scale like \sigma_{em}=\alpha/(1TeV)^2, and thus the cross section is about \sigma\sim 10^{-36}cm^2=1pb, where pb denotes picobarn. Data taking is measured in terms of inverse barns (the integrated luminosity). Independently of whether you calculate amplitudes with Feynman graphs, the amplituhedron or any other gadget, these quantities are surely universal. Neutrino interactions are related to the Fermi constant or the M_W mass, and if you search for some universal principle for scattering amplitudes, you will find out that, on general grounds, consistent spin one theories provide you polarization sums of type \sum g_j=0, and spin two ones \sum g_jV^\mu_j=0. There are likely issues at loop corrections, but surely the universal laws of physics should conspire to be coherent in the end, even with loop corrections. How? That is a puzzle.
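A sketch of the event-rate formula with the numbers just quoted (the 1 pb cross section is the illustrative value from the text):

```python
# Sketch: LHC event rate from N/t = L * sigma, with the numbers above.
L = 1e34                # instantaneous luminosity, cm^-2 s^-1
picobarn = 1e-36        # cm^2
sigma = 1.0 * picobarn  # a typical ~1 pb cross section

rate = L * sigma          # events per second
per_year = rate * 3.15e7  # seconds in a year
print(f"{rate} events/s, ~{per_year:.0f} events/yr")
```

So a picobarn-level process gives roughly one event per hundred seconds, hence the need for years of integrated luminosity.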

Finally, let me show you how to calculate the time a photon needs to escape from the sun. The sun is a dense medium, so photons interact with atoms and dense matter in the core before popping out from the exterior shells and arriving at Earth.

Data: solar density at the core \rho_c=150\times 10^3kg/m^3. Opacity is \kappa\approx 0.2m^2/kg.

The mean free path is L=1/\kappa\rho_c=3\times 10^{-5}m. The solar radius is about R_\odot=7\times 10^{8}m. A random walk of the photon in N steps yields d=L\sqrt{N}, so if the photon survives, reaching d=R_\odot takes N=(R_\odot/L)^2 steps. Finally, the total distance traveled by the photon is D=NL, so D=R_\odot^2/L. The time it takes a photon to travel D meters is obtained dividing by the speed of light, so

    \[t_\gamma=\dfrac{D}{c}=\dfrac{R_\odot^2}{Lc}\]

Plugging numbers

    \[t_\gamma=\dfrac{49\cdot 10^{16}}{3\cdot 10^{-5}\cdot 3\cdot 10^{8}}\sim 5\cdot 10^{13}s \sim 1\cdot 10^{6}yrs\]

So, a photon escapes from the sun interior to its surface in about 1 Myr. Increasing the opacity, you would get a higher number; letting the mean free path grow, with decreasing opacity (matter transparency!) due to density effects, you could get escape times of about 10^3-10^5yrs. Note that if photons did not interact, they would escape from the sun in a few seconds, like neutrinos!
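The whole random-walk estimate can be run as a few lines, using exactly the data quoted above:

```python
# Sketch: random-walk escape time of a photon from the solar core,
# with the data quoted above.
rho_c = 150e3   # core density, kg/m^3
kappa = 0.2     # opacity, m^2/kg
R_sun = 7e8     # solar radius, m
c = 3e8         # speed of light, m/s

L = 1 / (kappa * rho_c)   # mean free path, ~3e-5 m
N = (R_sun / L) ** 2      # random-walk steps to reach d = R_sun
D = N * L                 # total distance traveled = R_sun^2 / L
t = D / c                 # escape time, s
print(f"t ~ {t:.1e} s ~ {t / 3.15e7:.1e} yr")
```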

May the atoms be with you!

LOG#217. The 2 pillars.

Time of Cosmic Voyages https://www.youtube.com/watch?v=xEdpSgz8KU4

This is a long post, despite not being a special post (remember I make one of those every 50 posts). What are we going to see here? A five part post!

Part(I). A descriptive prelude.

  • Limitations of our electromagnetic observations? Reductionist vs. holistic visions? We were long limited to visible electromagnetic observations. No more today. Reductionism via atomic or quantum physics seems to have reached a limit. Quantum entanglement and contextuality change the global view. Complementarity holds.
  • Machines (artifacts). Particle physics uses wonderful detectors and colliders: cyclotrons, particle accelerators, synchrotrons, and others. We will see why all this stuff matters for your daily life and health.
  • Game rules. Relativity plus quantum mechanics. The two pillars of physics (the central topic of this post!). Matter and energy interplay via E=mc^2; wavelength versus momentum interplay via the duality \lambda=h/p. Precision and accuracy from these two big pillars make a further unification difficult.
  • Uncertainty principle. Concentrated energy acts as a microscope up to scales \Delta X\sim \hbar c/\Delta E, L\sim 1/p. There are some generalized uncertainty principles out there.
  • From the 150 years old Periodic Table to the cosmic roulette of particles. Is a 1/7 reduction of the number of elements good enough to be kept forever and ever?
  • Action is quantized! That is the hidden mantra of Quantum Physics. Energy quantization or momentum quantization are secondary. The key magnitude is action (actergy!). Forget what you think about quantization for a few seconds. The main object of quantization is the action (actergy). Other quantizations are truly derived from the action.
  • What is the relative strength of the 4 fundamental forces? Compare them at different distance scales. See it below these lines.
  • QCD mass versus Higgs-derived masses. You will learn (or remember, if you already knew it) that the mass of your body is essentially a QCD effect. Removing the Higgs would not make you massless. Higgs particles, via the Higgs field and spontaneous symmetry breaking, only give masses to the known elementary particles. Of course, the Higgs field is important, since it also allows atoms to become stable and bound,…Otherwise electrons would be massless. BUT the proton mass is, like many other hadron masses, a QCD effect. Why did nature choose to keep this accident? Well, it is fortunate for us, since atoms and complex objects depend on proton stability (or long-lifetime metastability), but I am not a big fan of the anthropic principle, or of claims that the laws of physics are well tuned for life to emerge.
  • Protons are complex objects. Textbooks, circa 2020!, seem to be obsolete, like the Terminator! I mean, how many of you keep thinking of protons or neutrons as solid big balls instead of wibbly wobbly timey wimey stuff? Just joking, whovians. I know you know what the time vortex equation truly is. Are elementary particles balls?
  • Fine tuning of parameters, stars and the origin of the elements from the primordial nucleosynthesis at the early Universe.

The SM is formed by the following set of elementary particles (up to more complex counting systems including charges and polarization modes!):

17 particles only! Usual atoms are made only of the first generation to a good approximation, so you pass from 118 (or more) periodic table elements to 17 particle types! That is a factor 7 reduction! A cosmic wheel way of representing these particles also exists:

Moreover, the Higgs and the top quark have extremely short lifetimes. Using the relationship between energy (mass) width and lifetime, you get that top quarks live about 10^{-25}s and Higgs bosons about 10^{-22}s, that is, between a fraction of a yoctosecond and hundreds of yoctoseconds! However, we are far away from Planck time physics (about 10^{-43}s). The above circle looks like a pokeball. Anyway, it is a triumph of reductionism: everything reduced to combinations of those particles. By the way, the Higgs particle determines at what distances particles interact. Particles that are Higgs-transparent are massless and they act over infinitely large distances. That is the case of electromagnetism and gravity. Subtle point: gluons, despite being massless, are confined into hadrons due to non-abelian features and to confinement. Top quarks interact very strongly with the Higgs. That is why tops are heavy. Similarly, the electroweak bosons W,Z also interact strongly with the Higgs, somewhat less than the top, and get masses of about 100GeV. Particles interacting more with the Higgs field and the Higgs particles are thus more massive. The Universe of particles is being explored right now at the LHC with 14TeV collisions. We see particles everywhere, from subnuclear distances down to scales of about a zeptometer at the LHC. Any particle is tagged with some particular properties called quantum numbers: mass, electric charge, angular momentum and spin, parity, weak charges, hypercharges,…Note that you are made of a big number of particles. Being about 70kg, supposing protons are what make you massive, you are a composite of about 10^{29} protons.
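The lifetime estimates follow from \tau=\hbar/\Gamma. A sketch, where the widths (the measured top width and the SM-predicted Higgs width) are assumed input values:

```python
# Sketch: lifetime from decay width, tau = hbar / Gamma. The widths are
# the measured top width and the SM-predicted Higgs width (assumed inputs).
hbar = 6.582e-25      # GeV * s

Gamma_top = 1.4       # GeV
Gamma_higgs = 4.1e-3  # GeV (~4 MeV)

tau_top = hbar / Gamma_top
tau_higgs = hbar / Gamma_higgs
print(f"tau(top) ~ {tau_top:.1e} s, tau(Higgs) ~ {tau_higgs:.1e} s")
```

The top comes out near 5×10^{-25} s and the Higgs near 1.6×10^{-22} s: yoctosecond-scale lifetimes, yet still eighteen orders of magnitude above the Planck time.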

Fields in the continuum and particles in the discrete are not contradictory views. Particles are just the excitations of fields. The usual picture of continuum versus discrete views of Nature has turned into a much more complementary, unified view today. Matter-energy and spacetime are made of FIELDS. There are only a few fields in the Universe, maybe manifestations in different mirrors of a single field (this is the unification dream: a final theory treating everything as a single force and field). Kant, Einstein, Faraday, Maxwell, Newton, Leibniz and many others have taught us a lot about these visions.

During the 19th century, a new formulation of classical physics was built. It is a variational approach, based on lagrangian and hamiltonian dynamics. Systems are encoded into gadgets called lagrangians (or lagrangian densities in field theory). They overcome the limited and sometimes hard to apply newtonian dynamics \sum F=Ma. In newtonian dynamics, the problem of understanding the Universe is reduced to understanding what mass is (unanswered in newtonian physics!) and to knowing what the forces of the Universe are. In the lagrangian and hamiltonian methods of rational mechanics, you are left to find out the symmetries of the problem and to compute the action, such that the equations follow from a minimum (or, more generally, a critical point) of the action functional. For first order lagrangians and classical hamiltonian dynamics, the equations read

    \[\dfrac{\partial L}{\partial q}-\dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{q}}=0\]

    \[\dfrac{\partial \mathcal{L}}{\partial \phi}-\partial_\mu\dfrac{\partial \mathcal{L}}{\partial \left(\partial_\mu \phi\right)}=0\]

for lagrangians of particles and fields respectively, and

    \[\dfrac{\partial H}{\partial p}=\dot q\]

    \[\dfrac{\partial H}{\partial q}=-\dot p\]

for hamiltonians of particles or fields. The first set of equations are the Euler-Lagrange (EL) equations, and the latter are the Hamilton Equations (HE). The problem of computing the forces is turned into the problem of finding or guessing the lagrangian or hamiltonian, and the dynamics is reduced to an understanding of the symmetries via the action principle and the Noether theorem. What is the problem with all this? The quantum mystery is just a mystery of the vacuum. Vacuum mysteries around you. Aristotle had a notion of purpose in his view of Nature: the act. With Galileo, we learned that there are act-less motions! Galileo refuted Aristotle's view of motion. That was further mathematically developed by Newton, Leibniz and others. What Galileo did experimentally and very naively mathematically, Newton and later scientists would do precisely. However, it turns out that classical physics, your own perception of the Universe, is biased. Classical physics is only an approximation. That was indeed anticipated by the EL and HE approaches and the action principle, where all the possible configurations are there in principle, but only one (the classical one, for reasons we will see later) is selected. That Nature tests everything is indeed the main argument of quantum physics! The usual non-deterministic view of Nature in quantum physics is surely a bit deeper than the EL or HE approaches to classical mechanics, and it has stunned everyone since then. But it is true, at least with the precision of our current experiments. Nobody knows the future for sure, but quantum mechanics and non-deterministic probabilistic statements are here to stay a long time (if not forever).
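The minimal action statement can be checked numerically. A toy sketch (not from the post): discretize the action of a harmonic oscillator with m=k=1 (units assumed) and compare the classical path against a perturbed path with the same endpoints.

```python
import math

# Toy check: discretize S = sum [ (1/2) v^2 - (1/2) q^2 ] dt for a
# harmonic oscillator (m = k = 1) and verify that the classical path
# has smaller action than a nearby path with the same endpoints.
n, T = 200, 1.0
dt = T / n

def action(path):
    S = 0.0
    for i in range(n):
        v = (path[i + 1] - path[i]) / dt       # midpoint velocity
        q = 0.5 * (path[i + 1] + path[i])      # midpoint position
        S += (0.5 * v * v - 0.5 * q * q) * dt
    return S

ts = [i * dt for i in range(n + 1)]
classical = [math.sin(t) for t in ts]                  # solves q'' = -q
bump = [0.1 * math.sin(math.pi * t / T) for t in ts]   # vanishes at endpoints
perturbed = [q + b for q, b in zip(classical, bump)]

print(action(classical) < action(perturbed))  # True
```

Any endpoint-preserving deformation raises the discretized action, which is the variational content of the EL equations.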

What is the point with quantum physics? Well, take for instance the neutron decay process

    \[n\rightarrow p+e^-+\overline{\nu}_e\]

It can happen, for a free neutron, in about 15 minutes (880 seconds, more or less). Quantum physics tells you that you CANNOT predict when the decay is going to happen. You are only allowed to ask for the probability of the neutron decaying in a certain time period. Particle decays are essentially quantum phenomena, and statistically poissonian. You cannot predict when something will decay, but knowing some probability distribution, and statistics, you can predict probabilities for events to happen. Thus, quantum physics is just a framework that tells you “probabilities of events”: instead of being told what is going to happen, you can only ask what the probability of something happening is. Of course, the caveat here is that even quantum mechanics has tricks…Under certain circumstances, you can find impossible events or even sure events. The sun is surely wasting its hydrogen fuel. Its photons arrive to us thanks to quantum physics (the tunnel effect, essentially). No matter if you hate the philosophy of quantum mechanics: it allows us to exist and live.
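The probabilistic statement is just the exponential decay law. A sketch with the ~880 s mean lifetime quoted above:

```python
import math

# Sketch: exponential (poissonian) decay law for a free neutron,
# with mean lifetime tau ~ 880 s.
tau = 880.0  # s

def p_decay(t):
    """Probability that the neutron has decayed within time t."""
    return 1.0 - math.exp(-t / tau)

print(f"P(60 s)   = {p_decay(60):.3f}")    # ~0.066
print(f"P(880 s)  = {p_decay(880):.3f}")   # ~0.632, i.e. 1 - 1/e
print(f"P(1 hour) = {p_decay(3600):.3f}")  # ~0.983
```

You cannot say *when* a given neutron decays, only these probabilities.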

Well, can we compute averages? Yes. QM lets you compute quantum averages of classical and quantum observables. The vacuum polarizes, and you can study particle production processes, even with antiparticles. Accelerators and colliders show that quantum fluctuations are not just bubbles. It is like a mesh of beams spreading out and interacting with everything in the surroundings. Quantum physics, then, says that every possible trajectory between A and B happens for quantum particles. Between A and B the states are undetermined (unless you measure, of course) and likely entangled. Quantum physics, in the terms of M. Gell-Mann, is the supreme totalitarian theory. Gell-Mann's totalitarian principle stated originally that everything that is not forbidden is mandatory; today it has been reformulated and upgraded.

Totalitarian QM principle: everything that is not forbidden can happen or will happen.

This principle is essential to have a broader view of what QM is (even if not completely understood by you or the experts!). Moreover, any fundamental interaction in the SM has, in its simplest terms, a Y-shaped vertex structure (leaving aside loop corrections).

How could we understand the totalitarian principle and the action formulation of classical AND quantum mechanics better? Let me begin by stating that Feynman graphs, even if complex, are a useful part of modern physics. Quantum fluctuations and fundamental force interactions are usually represented by Feynman graphs. They represent events in spacetime. Just as SR represents relativistic events with light-cones or space-time diagrams, particle physicists use Feynman graphs to model fundamental physics and interactions. Some examples from the SM:

That particles follow A SINGLE history is the past. They follow all the possible histories at once. Thus, Laplace's dream of a machine predicting the ultimate future of the Universe cannot be totally accomplished in a QM world. We can only say what the odds of our future are…In fact, QM has to use approximations and statistics, since mice, men and women, elephants, and big things are composites of many particles. Predictions would become IMPOSSIBLE without a statistical and probabilistic approach. Certainly, you may also know the bayesian approach to probability and science. Well, QM is the ultimate expression of bayesianism in the scientific world. Of course, you can check that some statements are TRUE and FALSE, but only with the precision of your experimental set-up and current theories or hypotheses. In summary, record this strongly: QM gives only probabilities of decays, not when decays will happen…

What about the action principle? The action principle gets enforced in quantum mechanics. In fact, the reason why we “see” the world as classical is twofold: the Planck constant is small, and the classical trajectories, those which are minimal, provide the main weight of the quantum action. The classical action is a magnitude equal to mass times the proper time, or energy times the proper time (with c=1), modulo a minus sign:

    \[S=-mc^2\tau\]

The quantum amplitude is something like the sum

    \[A(i\rightarrow f)=\exp(iS_1/\hbar)+\exp(iS_2/\hbar)+\cdots+\exp(iS_n/\hbar)\]

This sum is a complex number! The big thing is that non-dominant, non-classical paths “cancel” (or almost), and you are left with the minimal action principle from quantum mechanics. It is not that the other trajectories do not happen; they do. They interfere destructively, while the classical path is reinforced or enhanced by quantum interference!  There is a simple gedanken experiment (sometimes even simulated via applets on some websites). Take a light beam projector with low intensity I. Take a device that counts the number of clicks when photons arrive. Of course, this depends on \lambda and f, the wavelength and frequency of the light. You can count the clicks as the photons hit a sensor on the screen. Sometimes the light produces 36 clicks, other times 16 clicks. Indeed, you can write down a formula giving you a discrete relationship between the number of clicks and the paths the photons followed to the detector, something like this

    \[N=\sum_{paths}\vert\pm 1\vert^2\]

The indeterminism is just the result of the interference of the paths! But in this simple example, every trajectory has the same absolute value. Going further, the plus or minus one is due to the phase of a complex number. The action is A_1=-E_1t_1, A_2=-E_2t_2,…,A_n=-E_nt_n in general. Divide the action by \hbar to get a pure dimensionless number, multiply by i and then exponentiate, so you calculate \exp(iS_j/\hbar). The probability of any event is given by the sum over all possible paths

    \[P=\vert \mathcal{A}(i\rightarrow f)\vert^2=\vert\sum_{\gamma} c_\gamma e^{iS_\gamma/\hbar}\vert^2\]

This shows that the probability is maximized at the critical action: that is the minimal action principle of classical physics. Equivalently, think about paths vectorially! The sum is dominated by the classical trajectories, of extremal action, minimum time or shortest path. Have you ever imagined being quantumly teleported or abducted by an alien civilization to another galaxy? It could happen spontaneously too, but with a very, very low probability, despite that event being allowed by the laws of quantum physics. Unfortunately, there is no alien close to me to show me the Universe beyond our galaxy more directly.
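The destructive interference away from the classical path can be illustrated with a toy sum of phases; the quadratic "action" S(x)=(x-1)^2 and the window centers are illustrative assumptions, not from the post:

```python
import cmath

# Toy illustration: sum exp(i S / hbar) over a window of "paths" x, with
# the assumed action S(x) = (x - 1)^2, stationary (classical) at x = 1.
# Only the window around the stationary point adds up coherently.
def window_sum(center, hbar, width=1.0, n=1000):
    total = 0j
    for i in range(n):
        x = center - width / 2 + width * i / (n - 1)
        total += cmath.exp(1j * (x - 1.0) ** 2 / hbar)
    return abs(total) / n

hbar = 0.01
near = window_sum(center=1.0, hbar=hbar)  # paths around the classical one
far = window_sum(center=5.0, hbar=hbar)   # paths far from it
print(near > 10 * far)  # True: destructive interference away from x = 1
```

For small \hbar, the phases away from the stationary point rotate rapidly and cancel, leaving the classical contribution dominant, which is the stationary-phase heart of the argument above.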

Part(II). QFT: a tale of where we stand in high energy physics.

  • Vacuum is the main fundamental object in quantum field theory. It is indeed related to the Fourier expansion of field operators


The quantum realm is the world of the quantum vacuum or the quantum void. Call it voidium if you want. Quantum fluctuations allow you even to surpass the conservation of energy by tiny time shifts (commit a robbery: nobody would notice if you returned the money before the shop and the register opened again; that is the essence of the uncertainty principle). Everything that can happen will happen. Everything? Well, not quite, but this is again another story.

  • What are the symmetries behind the known possible fields and interactions?

There are no forces in quantum physics; indeed, there are only interactions between particles. The formal distinction of interactions is, likely, just a misnomer for different classes of phenomena, but it is still useful at these times. Why? Because if matter and energy are related by E=mc^2, the distinction between mass or matter and energy is a question not of mass or energy but of other features, indeed related to the quantum world. Magic word: spin! Particles have an internal angular momentum, a rotatory property we call spin. Fundamental forces are transmitted by integer-spin particles (spin zero, one or two), while matter fields are spin one-half particles.

  • Bosons and fermions as glue and LEGO pieces making up everything. You and me are not so different after all.

Spread out from the origin of time at t\sim 10^{-43}s till the current 14Gyr\sim 10^{18}s, or about ten to the 61 Planck times, the Universe is big: as far as we know 10^{26}m big, compared to the 10^{-35}m at its birth, i.e., ten to the 61 Planck lengths. However, it contains atoms, planets, comets, stars, galaxies,…Different scales of masses…You are only some tens of kg; planets or moons are about 10^{20} times that. Stars also have different spectra. You can still find stars with hundreds of solar masses in the Universe…Compact objects (even if stellar) are weirder but also exist. Take a solar mass and reduce it to the size of a continent: you get essentially a white dwarf star. To the size of a city: you get a neutron star. If you could compress them further, you would get black holes. Black holes are indeed the most compact massive objects we can find in this Universe! The Universe has a mass of about 10^{53}kg. Its density is very low, about the vacuum energy density of 10^{-27}kg/m^3 (the Planck density is about 10^{97}kg/m^3), or 1 proton per cubic meter. Outer space is vacuum, basically. On the other side you have neutrinos, the least massive particles of the standard model (we do not even know their exact masses!), with about 10^{-39}kg. So masses in the Universe, in a hierarchy we do not understand, are distributed over 92 orders of magnitude, even more if you consider that dark energy could be some kind of ultralight particle. Observed scales, not going down to Planck scales, are separated by, say, 44 or 47 orders of magnitude in distance. Currently, the universe has a temperature of about 3 K, when likely it began at the Planck temperature, 10^{32}K. The old limitations of optical telescopes were overcome. We now have tools to observe wavelengths the human eye cannot see unaided. Likely, machines will show us new ways to see the Universe beyond those we have already explored and initiated recently. 
Gamma rays, radioastronomy, neutrino astronomy and gravitational wave astronomy will be the new powerful tools of the future, for sure. Electromagnetic observations are also bounded cosmologically (at least in a one-time, isotropic, homogeneous time coordinate). The limit is the CMB. Beyond that, we will have to use neutrinos or gravitation. We cannot see the very early universe, before primordial recombination, with photons.

  • QFT=Special Relativity+Quantum Mechanics=SR+QM. The first result of this fusion is the existence of antimatter (however, the known Universe contains a very, very low quantity of antimatter, fortunately for us!).

Game rules set by relativity and quantum mechanics:




Relativity=Invariance under the Lorentz or Poincaré symmetry groups. You can classify particles with some numbers, just like you classify elements of the periodic table.  When are relativity and quantum mechanics important? Look at this size (energy) vs. velocity plot:

  • Leptons and quarks. What are their properties? That means knowing their quantum numbers. Quantum numbers of elementary particles include (rest, invariant) mass, angular momentum (spin), parity, electric charge, hypercharge (weak charge) and sometimes chirality or polarization degree (L or R for the usual left-handed or right-handed polarizations).

  • We recall the 3 generations of the SM. But what is mass? Firstly, elementary particles get masses from the Higgs field. BUT protons, which make up you, hydrogenated atoms, stars, and heavy nuclei, get masses from the strong force! The so-called chiral symmetry breaking is the way in which hadrons get masses. Note that a proton has a mass of about 1GeV/c². But the constituent u,u,d quarks are 2.3+2.3+4.8=9.4 MeV. So where is the remaining proton mass? Hint: the proton is much more complicated than this naive 3-ball picture. Plug about 1 fm into the E-x uncertainty principle: E=\hbar c/\lambda\approx 3\cdot 10^{-26}/10^{-15}J, that is, about 3\cdot 10^{-11}J (some 200 MeV) of fluctuating quantum stuff. Protons cannot be imagined like a three-ball coconut. A proton is instead a result of the QCD vacuum! So the proton is (uud)+(gluon kinetic energy)+(particle+antiparticle hadrons)+…a messy composite object. What is a proton then? Well, something like this will surprise you:

Mass from QCD is a highly non-trivial process. Indeed, some time ago, that process was called (I think it still is, but it is horrible as a name) dimensional transmutation, or nonperturbative mass generation. Only about 1% of the proton mass comes from the Higgs field. You can possibly compare this to residual electromagnetism in your daily life. Why do walls not drop on your head? Electromagnetism is strong compared with gravity at common scales: 10^{42} times stronger than gravity. BUT electric charges compensate each other, except for some residual forces, the Van der Waals forces (and some ionic or covalent variations), so you are not killed by walls thanks to electromagnetic residual forces from chemical bonding!
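The uncertainty-principle estimate above can be checked directly with E \sim \hbar c/r at the proton radius:

```python
# Sketch: energy scale probed at the proton radius, E ~ hbar c / r.
hbar_c = 3.16e-26     # J * m  (hbar * c ~ 197 MeV * fm)
r = 1e-15             # m, about 1 fm
J_per_MeV = 1.602e-13

E = hbar_c / r
print(f"E ~ {E:.1e} J ~ {E / J_per_MeV:.0f} MeV")  # ~200 MeV, the QCD scale
```

That ~200 MeV is roughly the QCD scale, so most of the ~940 MeV proton mass is confined field energy, not quark rest mass.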

  • Hard part: Q.E.D. as quantum electromagnetism, and how to intuitively get a picture of the quantum fundamental rule P\sim\vert\sum_\gamma c_\gamma\exp(iS_\gamma/\hbar)\vert^2.

The action principle comes for free in a lagrangian formulation. The totalitarian principle applied to strong fields (in both curved and flat spacetime) implies another incredible result in field theory (yet to be experimentally tested). Strong fields CAN create particle pairs. This is the Schwinger effect, and it can be easily derived from the action principle. From purely energetic views of SR and QFT, turning on a big enough electric or magnetic field, you could create any suitable particle-antiparticle pair. For electrons in QED, you get a critical field:


    \[E_c=\dfrac{m_e^2c^3}{e\hbar}\]

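A numerical check of the standard Schwinger critical field E_c=m_e^2c^3/e\hbar and the corresponding B_c=E_c/c:

```python
# Numerical check of the Schwinger critical fields,
# E_c = m_e^2 c^3 / (e hbar) and B_c = E_c / c.
m_e = 9.109e-31   # kg
c = 2.998e8       # m/s
e = 1.602e-19     # C
hbar = 1.055e-34  # J * s

E_c = m_e**2 * c**3 / (e * hbar)
B_c = E_c / c
print(f"E_c ~ {E_c:.1e} V/m, B_c ~ {B_c:.1e} T")  # ~1.3e18 V/m, ~4.4e9 T
```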
The values of these fields are very big: E_c\sim 10^{18}V/m and B_c\sim 10^{9}T. QED alone is incomplete at very large energies: electromagnetic and weak forces are unified at energies of about 100GeV. Weak interactions are essential to understand radioactivity and how some particles “change identity”, or flavor. This turns out to be necessary for stars to exist. The proton-proton process giving rise to stellar fusion is energetically possible, accidentally, and it is another surprising fine tuning of the SM:

    \[p+p\rightarrow D+e^++\nu_e,\qquad D+p\rightarrow {}^3He+\gamma\]

    \[{}^3He+{}^3He\rightarrow {}^4He+p+p\]

This chain and the CNO cycle are not possible without the observed features of the nuclear weak and strong forces. However, we do not know why we observe 3 copies of particles with identical properties except for masses.

  • Gauge symmetries. The fundamental \Psi'=e^{i\alpha}\Psi global versus local gauge symmetry transformations. These transformations do not change the physics and determine the interactions via the gauge field A=A_\mu(x)dx^{\mu}.

The quantum electromagnetism of QED is the result of a complex field and the QM structure of the vacuum. Wave functions are not directly observable. They can be partially observed via phase shifts (the Aharonov-Bohm effect) or via the Born rule. Majestic: you can only calculate the probability of field distributions in space-time. Quantum fields are generally complex-valued objects, and you get probabilities from amplitudes using the rule P\sim\vert\Psi\vert^2. That is it. Physics does not change if you multiply, in QED, the wavefunction by a global phase


    \[\Psi'=e^{i\alpha}\Psi\]

If you now make the phase local instead of global, that is, if you allow the phase to change in space-time as well, you are forced to introduce a new field if you want to recover invariance. This field is, for the U(1) case above, the electromagnetic field A_\mu. This gauge symmetry determines the structure of interactions, and it can be generalized to non-abelian fields, like those required by the weak and nuclear forces. Gauge symmetry tells you whether you can forbid or allow certain interaction terms in the lagrangian device! Gauge symmetry also determines how the interactions arise between photons, electrons, W and Z bosons, and gluons (gluons and QCD are the exotic beasts here, since they contain self-interactions not seen in weak or electromagnetic interactions).

  • The running of the “fundamental constants”. Due to quantum effects, the vacuum itself is not static; it changes. It polarizes. The amazing consequence for usual fundamental physics is that fundamental constants are not constant anymore. That is,


K is NOT constant, and thus \alpha is not constant. The polarization of the vacuum makes the vacuum permittivity variable. Thus, \alpha=\alpha(r) or, equivalently, \alpha=\alpha(E). At nuclear distances, about 1 fm or the femtometer scale (10^{-15}m), the usual fine structure constant is already not exactly 1/137. At the LHC, in fact, at energies of about 7TeV, the fine structure constant is about \alpha\sim 1/100. So the running of the “constants” is slow. Indeed, it is a logarithmic variation, ruled by the so-called renormalization (semi)group equation and the so-called beta function:


    \[\mu\dfrac{dg}{d\mu}=\beta(g)\]

Note the differences and similarities. In a quantum world, charges and masses get dressed or renormalized due to quantum fluctuations and the Heisenberg principle. The vacuum polarizes, and in the case of QED there is screening of the coupling constant, increasing its value (the opposite effect happens in QCD: there is antiscreening).
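A sketch of the logarithmic QED running, keeping only the electron loop at one loop (an approximation: including all charged fermions would push \alpha(M_Z) near the measured ~1/127):

```python
import math

# Sketch: one-loop QED running with only the electron loop,
# alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) ln(Q^2 / m_e^2)).
alpha0 = 1 / 137.036
m_e = 0.511e-3  # electron mass, GeV

def alpha(Q):
    """Effective fine structure constant at scale Q (GeV), for Q >> m_e."""
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / m_e**2))

for Q in (1.0, 91.2, 7000.0):
    print(f"alpha({Q} GeV) = 1/{1 / alpha(Q):.1f}")
```

The screening only changes \alpha by a few percent over four orders of magnitude in energy, which is the slow logarithmic running described above.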

The fine structure constant gets bigger at shorter distances: plotted against the logarithm of the distance, alpha decreases with increasing distance, or it increases as the distance decreases. The vacuum is a nasty object in QFT. You can visualize vacuum bubbles or loops of virtual particle-antiparticle pairs popping out from the fundamental amplitude:


and possibly many weird loops with subloops and topologies can also arise, due to the symmetries of the given interactions. For our 3 interactions, we get the SM as a gauge theory. Quarks are tied or glued by gluons. There are 3 color charges. There are two electric charges (one hypercharge) and 6 flavors for both leptons and quarks. Baryons are 3-quark composite objects. Colors R (RED), G (GREEN), B (BLUE) and anticolors \overline{R}, \overline{G},\overline{B} combine so that hadrons are colorless; since isolated quarks are not observable, every observable particle must be colorless. The weak interaction (or the electroweak interaction at scales of 100GeV) allows the change of flavor and weak charge.

The ideas above, of renormalization, vacuum polarization, undetermined intermediate (virtual) states, running coupling constants, and Feynman graphs, are iterated for the 3 interactions different from gravity. In QED you get 1 photon, in the electroweak theory three massive vector bosons, and in QCD you get 8 gluons: of the 9 color-anticolor combinations, one extra colorless combination decouples. Every SM particle is colorless (excepting quarks and gluons) and carries electric charge, flavor (weak charge) and hypercharge, plus spin. Every particle, an excitation of a single field, can be seen as a wave perturbation in the field. The SM imposes:

  1. A simple gauge symmetry for the L part in (e,\nu), (u,d), the first generation.
  2. Gauge invariance and compensating field trick for any fundamental interaction.
  3. Optional (mandatory): mixing of generations is possible.
  4. Photon and gluons are massless, the W,Z, and H are massive by construction.
  5. Massive W,Z are problematic from the gauge theory viewpoint. This is what motivated the creation of the Higgs field and the SSB mechanism. The massive W, Z mediate exponentially suppressed (short-range) interactions, and they are unstable, decaying in short times.

Every particle has “polarization” modes, generally denoted by L and R. The SM is a theory for the electroweak theory plus the strong force (quantum chromodynamics, QCD) part explaining nuclei and hadrons. Recipe (oversimplification):

  • The SM is the Glashow-Weinberg-Salam model based on the gauge group G_g=SU(3)_c\times SU(2)_L\times U(1)_Y.
  • Electroweak forces are mediated by photons \gamma (massless) and gauge bosons W^{+},W^{-},Z.
  • The SM mixes particles between different generations and particles inside generations. The first generation comprises the main matter of the universe and it is stable: (u,d),(\nu_e,e). Two other replicas of the first generation exist. Why? Nobody knows for sure.
  • The SM does NOT contain gravity, which is negligible for particle interactions at subatomic scales in all the main circumstances.
  • That photons or gluons (the glue of QED and QCD) remain massless is due to the particular structure of the SM. The masses of the Z and the W are obtained like the other fundamental particles, via the Higgs field interaction.
  • SSB is a process similar to superconductivity and to the collective orientation of atoms/spins in condensed matter systems. The fact that the Z and W bosons are massive makes the weak interaction short-range, unlike gravity or ordinary electromagnetism.
  • The Higgs mechanism is a two-part device or gadget: it contains the SSB (spontaneous symmetry breaking) tool, and the dynamical part via a Higgs self-potential.
  • What is the precision of the SM? About 1 part in 10^{12} in some cases! It rivals GR precision too! The measurements of the magnetic moment of the electron are that precise, comparing (g/2)_{th} vs (g/2)_{exp} (the muon case is anomalous, a long-standing problem in particle physics possibly pointing to a BSM theory, just like massive neutrinos).
  • Open problems in the SM: the nature of Dark Matter (it cannot be standard known SM particles), the strong CP problem (why is there no electric dipole moment of the neutron?), the hierarchy problem, the naturalness problem, the anomalous magnetic moment of the muon, neutrino oscillation patterns being very different from quark mixing patterns (via the different measurements of the CKM and PMNS matrices, due to Cabibbo-Kobayashi-Maskawa and Pontecorvo-Maki-Nakagawa-Sakata), why there is almost no antimatter in the Universe, the flavor problem (why 6 quarks? why 3 generations?), the nature of the QCD resonances, the early Universe picture of particle physics from the SM particles (in particular the EW phase transition), and the properties and nature of the quark-gluon plasma.

The SM gives, self-consistently, a solution to the “problem” of how to get Z-W boson masses without spoiling the local gauge invariance of the fields. Mathematical details will be provided later. The Higgs mechanism is a two-piece machine: a) a breakdown (SSB) mechanism, b) Higgs field dynamics; both (at least from a conservative viewpoint) are fully included in the SM. And, being more precise, the SM achieves its maximal precision in the measurements of the magnetic moment of the electron. Compare the theoretical prediction (supercomputers and heavy calculus are needed to compute it!):

    \[\left(\dfrac{g}{2}\right)_{th}=1.00115965218161(23)\]
with the experimental result
    \[\left(\dfrac{g}{2}\right)_{exp}=1.00115965218073(28)\]

Let me point out that these results are not the latest data or theoretical predictions. There is a certain tension between theory and experimental results, but it is not a huge one. 12 decimals of precision is quite a thing. Imagine knowing the distance to the Sun with such a precision.
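As a quick sanity check of that analogy, here is the arithmetic in Python (the AU value is the IAU one; the comparison itself is just an illustration):

```python
# 12 significant decimals of relative precision applied to the Earth-Sun distance.
AU_M = 1.495978707e11          # 1 astronomical unit in metres (IAU 2012)
REL_PRECISION = 1e-12          # 1 part in 10^12, like (g/2) for the electron

def au_uncertainty_m(rel=REL_PRECISION):
    """Absolute uncertainty, in metres, for a given relative precision on 1 AU."""
    return AU_M * rel

if __name__ == "__main__":
    print(f"Knowing 1 AU to 1 part in 1e12 means +/- {au_uncertainty_m():.2f} m")
    # i.e., the distance to the Sun known to roughly 15 centimetres
```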

On the other hand, there IS one big point where new physics does arise in the SM: the neutrino sector. We know from the last years of the 20th century, from current neutrino beams, solar neutrino experiments, reactor experiments and (as a lesser hint) from cosmological data, that neutrinos are special. Neutrinos are not only the smallest chunks of matter you can get in the SM (and their concrete masses are not yet known!). Neutrinos are identity shifters! Neutrinos come in 3 flavors or species (at least the SM neutrinos; theories with more than 3 do exist. Why 3 light neutrinos? Why left-handed?). It turns out that neutrinos can transform among the 3 types when travelling long distances! In fact, there is a similar phenomenon inside hadrons: quarks also mix! Transitions between neutrino types are modeled by a gadget called the PMNS matrix, or neutrino oscillation matrix. Formally, there is also a CKM quark mixing matrix. Mathematically:

    \[\vert\nu_\alpha\rangle=\sum_{i=1}^{3}U^{\ast}_{\alpha i}\vert\nu_i\rangle,\;\;\;\alpha=e,\mu,\tau\]

Experimental data show that U_{CKM}\sim \mbox{diag}(1,1,1), while the neutrino mixing matrix is something much more complicated, something with entries like this:

    \[U_{PMNS}=\begin{pmatrix}\square & \bullet & \cdot\\ \circ & \bullet & \square\\ \circ & \bullet &\square\end{pmatrix}\]

The CKM matrix is more or less diagonal, but it also seems to have substructure

    \[U_{CKM}=\begin{pmatrix}1 & \square & \circ\\ \square & 1 & \cdot\\ \circ & \cdot & 1\end{pmatrix}\]
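To make the diagonal-vs-democratic contrast concrete, here is a small numeric sketch in Python; the |V| magnitudes below are rounded, illustrative global-fit-like values, not official numbers:

```python
# Approximate moduli |V_ij| of the quark (CKM) and lepton (PMNS) mixing matrices.
# Illustrative, rounded magnitudes: CKM is nearly diagonal, PMNS is "democratic".
CKM = [[0.974, 0.225, 0.004],
       [0.225, 0.973, 0.041],
       [0.009, 0.040, 0.999]]

PMNS = [[0.82, 0.55, 0.15],
        [0.37, 0.57, 0.70],
        [0.39, 0.59, 0.69]]

def row_norms(m):
    """Each row of a matrix of unitary-matrix moduli should have unit norm."""
    return [sum(x * x for x in row) for row in m]

def off_diagonal_weight(m):
    """Fraction of sum |V_ij|^2 off the diagonal: small for CKM, large for PMNS."""
    total = sum(x * x for row in m for x in row)
    off = total - sum(m[i][i] ** 2 for i in range(3))
    return off / total

if __name__ == "__main__":
    print("CKM  row norms:", [round(n, 3) for n in row_norms(CKM)])
    print("PMNS row norms:", [round(n, 3) for n in row_norms(PMNS)])
    print("off-diagonal weight, CKM :", round(off_diagonal_weight(CKM), 3))
    print("off-diagonal weight, PMNS:", round(off_diagonal_weight(PMNS), 3))
```

The off-diagonal weight is a crude one-number summary of "how much mixing": a few percent for quarks versus roughly half for neutrinos.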

Furthermore, it seems that somehow the mixing angles of these two matrices are complementary to each other: approximately, \theta_{12}^{CKM}+\theta_{12}^{PMNS}\sim\pi/4 (the so-called quark-lepton complementarity). Nobody knows why, and the third mixing angle of the PMNS matrix, \Theta_{13}, was measured only recently (a few years ago). Also, there are some hints of CP violation (naturally expected from the SM) in the PMNS matrix, something that is well tested in the quark setting. Did you know we have more neutrino unknowns? Neutrinos are the ONLY electrically neutral fundamental fermions in the SM. Beyond not knowing their masses, we do not yet know if the spectrum is normal (atomic-like) or inverted. We do not know if neutrinos are Dirac or Majorana particles. That is, neutrinos could be the only fermions in the SM that are their own antiparticles (a very bosonic trait!). Why does it matter? Well, if neutrinos are their own antiparticles, we could in principle understand why there is almost no antimatter in the observable Universe. To explain it, we should be able to understand how matter and antimatter cancelled out with a mismatch of 1 part in 10^{10}, i.e., 10^{10}+1 versus 10^{10}. Otherwise, the whole Universe would be very different or it would not exist!
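The oscillation phenomenon behind all this is usually summarized, in the two-flavor approximation, by a single formula; the following sketch uses it with rough, T2K-like illustrative numbers (baseline, energy and mixing values are assumptions, not a fit):

```python
import math

def p_oscillation(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor neutrino oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2[eV^2] * L[km] / E[GeV])."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return sin2_2theta * math.sin(phase) ** 2

if __name__ == "__main__":
    # Illustrative, roughly T2K-like inputs: atmospheric dm^2 ~ 2.5e-3 eV^2,
    # nearly maximal mixing, 295 km baseline, 0.6 GeV beam energy.
    p = p_oscillation(sin2_2theta=1.0, dm2_ev2=2.5e-3, L_km=295.0, E_gev=0.6)
    print(f"two-flavor transition probability ~ {p:.3f}")
```

With those numbers the phase sits near the first oscillation maximum, which is exactly why such baselines and energies are chosen.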

Well, time to go with asymptotic freedom, quark confinement and gluons…

Part(III). A short guide of QCD. From quarks to quark-gluon plasmas.

  • The two main features of QCD: asymptotic freedom and confinement. Asymptotic freedom is just the opposite behaviour to the screening in QED: the strong coupling decreases with decreasing distance! That is very counterintuitive. We are familiar with couplings that grow at short distances; the strong force is different. At high energies, i.e., short distances, you are essentially “free” of the strong force. Subtle, especially since we call the strong force, well, strong. Confinement is the weird feature of strong interactions that makes free quarks (or any color charges) invisible. You could then ask how we know quarks exist after all, if they do not exist as isolated objects. The main proof, beyond all the accumulated QCD evidence, is the jet structure we get from particle collisions. No free quarks, sorry, but quark bunches! Wibbly wobbly quarky quark stuff.
  • Hadrons come in two groups: baryons and mesons. Multiquark states with N>3 quarks and even quarkless states (e.g., glueballs or gluonium) are a known hot topic in QCD.
  • The model of quarks and partons. Partons were introduced by Feynman and Bjorken even before the quark theory was finished. Protons have a complex structure. At low energies, you can naively imagine hadrons as valence quarks, but at higher energies hadrons are made from valence quarks PLUS other wibbly wobbly timey wimey stuff (sea quarks and gluons). Oh, yes! Microscopically zooming into a proton is a fantastic journey in itself.
  • The strength of strong interactions. Why is the strong force the strongest force? Well, fortunately for your atomic nuclei and protons, it just is.
  • Quark-gluon plasmas (QGP). At trillions (American) of kelvin, you get a wild soup of unbound quarks and gluons. This is the quark-gluon plasma. It behaves as an almost perfect fluid.

The Manhattan project, I think you may not know this, provided funding for particle physicists during and after the Second World War. In the 1950s, many investigations revealed a surprising subatomic, subnuclear world. Many types of particles and resonances arose. Just as Mendeleev built a periodic table for the elements, particle physicists had to create a big frame for the world of hadrons they were discovering. A famous joke in those years was that a Nobel Prize came attached to the discovery of every new hadron state/particle. Murray Gell-Mann (RIP) and Ne'eman (RIP), independently of Zweig, discovered a way to classify hadrons into schemes using group theory and quantum numbers. Zweig's “ace” theory was not popular even though it was pretty similar; it is an interesting example of how the same ideas arise in different people at the same time, and of how names do matter to sell your research! The eightfold way paved the road for the establishment of the quark model, and for the rise of QCD as a gauge theory beyond the S-matrix formalism reigning in the 60s. For instance:

You can do hadron spectroscopy with particle physics! I wrote about the names of those hadron states here on TSOR; you can search for them. It is quite curious how many curious unstable particles there are. Funny fact: the omega baryon was taken as inspiration for some Star Trek episodes, like the Omega Directive, about Omega particles making warp travel impossible. Any hadron is very complex. Protons at very high energies are messy stuff: you can see inside protons other quarks and strange objects. For energies similar to the proton mass, you can still keep the 3-ball picture (uud) for the proton (and similarly for other baryons). BUT the valence quark picture is only an approximation valid at a certain scale. The quark model arising in the 1970s was deliberately devised to understand the color charge (and why states like the sss (\Omega^-) or the \Delta^{++} particle can exist). The quark model is in debt to O. Greenberg's parastatistics, an exotic (even today!) topic related to quantum statistics beyond fermions and bosons.

On the other hand, the strong force is weird due to confinement. Try to ionize a proton just like you do with an atom. Unlike with atoms, you get a quark-antiquark pair very soon. This feature is similar to particle pair creation by the Schwinger effect in strong-field QED/gauge theory, but you get it for “free” in QCD. The strong force is the most quantum force of the 3 interactions in the SM. The absence of free quarks is very similar to the absence of magnetic monopoles. Just note that:

    \[\dfrac{E(proton)}{(2m_u+m_d)c^2}\sim 100\]

Thus, quarks are very relativistic as well! Remarkably, the same operation done over the hydrogen atom gives you E/(m_ec^2) of about 10^{-5}. So, unless it is excited, or heavier nuclei are considered, simple atoms are generally not relativistic at the level of their binding energies. That the strong interaction (SI) is very quantum and very relativistic is a known fact. Beyond the parton model by Feynman, coding some structure functions for hadrons, there are some interesting simplified models in the description of quarks and gluons. Maybe the most important model is that of strings. Yes, string theory was born as a strong interaction theory. Hadrons are just flux tubes of color, trapped, wibbling and wobbling wildly. It turns out that the flow of gluon lines carries almost all the energy of quarks and hadrons. Indeed, about 99% of the mass of ordinary matter is due to gluonic field interactions, i.e., the color flux tubes. That hadrons are just balls tied up by strings is a useful picture but not too faithful today, and spin-two states are just an oddity in nuclear and particle physics. However, spin-two interactions are known to be those caused by gravity and gravitons, so string theory transformed itself into a theory of everything, as it remains (though strangely incomplete) today. You can hear some discussions between theorists, and read some blogs, about gravity being the square of some Yang-Mills theory. Well, it is not quite precise to say it so simply, but it works in some theories and models, so keep an eye on that.
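The two ratios just quoted can be reproduced with rough numbers; the quark masses below are approximate PDG-like current-quark values (assumptions for illustration):

```python
# How relativistic is a bound state? Compare the bound-state energy with the
# constituent rest-mass energy. Quark masses are rough current-quark values.
M_PROTON_MEV = 938.27
M_UP_MEV = 2.2         # up quark current mass (approximate)
M_DOWN_MEV = 4.7       # down quark current mass (approximate)
RYDBERG_EV = 13.6      # hydrogen binding energy
M_ELECTRON_EV = 0.511e6

proton_ratio = M_PROTON_MEV / (2 * M_UP_MEV + M_DOWN_MEV)   # ~100: ultra-relativistic
hydrogen_ratio = RYDBERG_EV / M_ELECTRON_EV                 # ~3e-5: non-relativistic

if __name__ == "__main__":
    print(f"proton energy / quark rest masses  ~ {proton_ratio:.0f}")
    print(f"hydrogen binding / electron mass   ~ {hydrogen_ratio:.1e}")
```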

Another interesting QCD model is that of the spring (string) tension. Essentially, confinement is linear with constant tension. The tension of any hadron is about 1\,GeV/fm\approx 1.6\cdot 10^5\,N. However, if you hit a proton with about 10 TeV of energy, at distances of about a (deci)attometer, the effective tension would instead be 10^{12}N or 10^{13}N. Taking into account that tension is a force (a stress times the sectional surface of the material), you get that hadron tension is huge, very huge. It can be compared to the tension of a meter of graphene for stiffness, or to that of steel over one cm^2, among the greatest experimentally tested values! Thus, it is not surprising that nuclei and hadrons are so stable, is it? And yet, the string/spring model is a pretty simple explanation of all of this. You get


    \[F_q=\sigma=\mbox{constant},\;\;\; V_{string}(r)=\sigma r\]

and at shorter distances you would get asymptotic freedom and deconfinement via the potential

    \[V_d=-\dfrac{c\alpha_s}{r}+\sigma r\]

where c is some constant (the color factor, generally written as 4/3, times \hbar c) and V is given in GeV. I would like to note that non-perturbativeness is essentially a key property of confined QCD. It yields exponential terms giving rise to particle pair creation, similarly to the Schwinger effect, via

    \[\Gamma\propto e^{-\pi m_q^2c^3/\hbar\sigma}\]
up to unit conversion constants!
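The Coulomb-plus-linear (Cornell-type) potential above is easy to evaluate; the parameter values below (alpha_s, string tension) are illustrative choices, not fitted ones:

```python
# Cornell-type quark potential: Coulomb-like piece at short distance plus linear
# confinement at long distance. Parameter values are illustrative, not a fit.
HBARC_GEV_FM = 0.1973    # hbar*c in GeV*fm
ALPHA_S = 0.3            # strong coupling at hadronic scales (rough)
SIGMA_GEV_PER_FM = 1.0   # string tension ~ 1 GeV/fm

def cornell_potential(r_fm):
    """V(r) = -(4/3) * alpha_s * hbar*c / r + sigma * r, in GeV, r in fm."""
    return -(4.0 / 3.0) * ALPHA_S * HBARC_GEV_FM / r_fm + SIGMA_GEV_PER_FM * r_fm

if __name__ == "__main__":
    for r in (0.1, 0.5, 1.0, 2.0):
        print(f"V({r} fm) = {cornell_potential(r):+.3f} GeV")
    # At large r the linear term dominates: separating quarks costs ~1 GeV per fm,
    # so the string "breaks" by creating a quark-antiquark pair instead.
```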

Well, the quark-gluon plasma has a temperature of about 10^{12}K. Take the main typical QCD energy: the chiral vacuum of QCD has an energy of about 100 MeV, which is about 10^{-11} joules. Use the Boltzmann constant to turn this energy into the temperature of fundamental quark-gluon melting, and you get that trillion (American; billion European) of degrees. Similar estimates can be done for a normal electric plasma (you get from tens of thousands up to millions of degrees, considering that atomic energies run from about 1 eV up to the keV scale), and, even more, you can guess the ultimate hot temperature, the Planck temperature, from this kind of argument: about 10^{32}K. What are the properties of the QGP? A simple list:

  • It behaves like an almost perfect fluid (without friction!).
  • It is not gluon transparent.
  • It has a complicated phase diagram, much more complicated and subtle than was initially expected.
  • QGP was the main composition of the Universe when the Universe was between about one picosecond and 1 or 10 microseconds old… Then protons, neutrons and other hadrons became confined.
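The temperature estimates above all follow from the same T = E/k_B conversion; a minimal sketch:

```python
# Energy -> temperature via T = E / k_B, the estimate used in the text.
K_B_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K

def energy_to_temperature_K(energy_ev):
    """Temperature at which thermal energy matches the given energy scale."""
    return energy_ev / K_B_EV_PER_K

if __name__ == "__main__":
    for label, e_ev in (("atomic scale (1 eV)", 1.0),
                        ("QCD chiral scale (100 MeV)", 100e6),
                        ("Planck energy (1.22e19 GeV)", 1.22e28)):
        print(f"{label:28s} -> {energy_to_temperature_K(e_ev):.2e} K")
```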

As an image is much better than words, let me show you some representations of the QCD phase diagram (similar to the water phase diagram you study at school):

Let me remind you of one thing: the strong coupling “constant” is about 0.118 at LHC energies, and vacuum Feynman graphs should be counted with care, as we have 3 colors and a non-abelian (non-commutative) gauge theory! However, beyond these interesting properties, there is a general framework for the full SM, and now we will see its full power…

Part(IV). The power of the SM.

The SM lagrangian is, formally simplified, a sum:

    \[L_{SM}=L_m+L_g+L_{int}+L_{H}=i\overline{\Psi}\gamma\cdot D\Psi+F^2+G^2+g_Y\overline{\Psi}\Psi\phi+\vert D\phi\vert^2-V_\phi\]

The simplest Higgs potential dynamics is encoded via

    \[V_\phi=-\mu^2\vert\phi\vert^2+\lambda\vert\phi\vert^4\]
Further interaction terms, more complicated, are allowed in BSM theories, but the SM is just a g\phi^3+\lambda\phi^4 from a perturbative viewpoint.
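Given a Mexican-hat potential of that shape, a minimal numeric sketch of the SSB bookkeeping, assuming the textbook relations m_H^2 = 2\mu^2 and v = \mu/\sqrt{\lambda} with rounded measured inputs:

```python
import math

# Mexican-hat Higgs potential V = -mu^2 |phi|^2 + lambda |phi|^4.
# Using the measured Higgs mass and vev as inputs, the standard relations
# m_H^2 = 2 mu^2 and v = mu / sqrt(lambda) fix both parameters (rounded numbers).
M_HIGGS_GEV = 125.25
VEV_GEV = 246.22

mu_gev = M_HIGGS_GEV / math.sqrt(2.0)   # mass parameter mu
lam = (mu_gev / VEV_GEV) ** 2           # quartic self-coupling lambda
phi_min = mu_gev / math.sqrt(2.0 * lam) # minimum of V, equals v/sqrt(2)

def potential(phi, mu=mu_gev, l=lam):
    """V(phi) along a real field direction, in GeV^4."""
    return -mu**2 * phi**2 + l * phi**4

if __name__ == "__main__":
    print(f"mu ~ {mu_gev:.1f} GeV, lambda ~ {lam:.3f}")
    print(f"potential minimum at phi = {phi_min:.1f} GeV (= v/sqrt(2))")
```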

The SM does NOT contain gravity. It only codes 3 out of 4 fundamental interactions. However, the framework is self-consistent (up to some technical problems we do not know how to solve yet) with the Higgs mechanism (SSB plus the Higgs potential). Interactions?

  1. Strong force by 8 gluons.
  2. Weak force by W and Z bosons.
  3. Electromagnetic force by photons.
  4. Mass generation for elementary particles via the Higgs and Yukawa-like interaction terms, which come not from any symmetry but from the Higgs mechanism and its dynamics.

Gravity is a force apart from the SM, even though you can in principle calculate graviton scattering processes. Taking into account loop corrections is a nightmare with gravity: the Feynman graphs blow up in number and you cannot control the destiny of the nasty infinite terms spoiling renormalizability. Thus, we hope a new BSM theory will help us solve this. QG frameworks such as superstrings/M-theory or LQG were designed to live better with these problems. Today, they have helped with some fundamentally mathematical details, but we lack stringy/loopy experimental support. We have, however, three mysterious generations, the 6 quarks and 6 leptons, 4 gauge bosons and the Higgs-like boson at 125 GeV telling us that something else is required, but we do not know how or what it is. This post is already showing you:

  • There are known knowns.
  • There are known unknowns.
  • There are unknown unknowns, likely, out there, hidden in the noise of our current data.

How does the LHC work? Take hydrogen (from bottled gas) and strip off the electrons with electric fields. You need high-frequency electric fields: radiofrequency cavities accelerate the particles. Create low-temperature conditions (1.9 K, about 2 K) in the LHC main tubes. Get superconductivity, with magnetic fields of about 8.3 T. From the ionized hydrogen atoms, get bunches of protons (about 10^{11} protons per bunch). Take groups of about 3000 bunches and insert them into a 27 km long collider by successive injections. You can show that protons circle the full LHC about 11245 times per second, with bunch crossings at 40 MHz, i.e., every 25 ns. Then build up some cool detectors (ATLAS, CMS, ALICE and LHCb are their names, plus some new detectors under construction to test other fantastic theories). Then build up a cool, wonderful computing system and architecture, such that you can pile up data with detector latencies of about 3 microseconds. Note that you can even “detect” particles like the tau, with a lifetime of about 300 femtoseconds, and, even more, you can infer the existence of very short-lived resonances like the Higgs, the top quark and other known particles with lifetimes of 0.1 yoctoseconds! But you look at energies, not detector response times. Essentially, any particle collider is a time series for the energy-mass-frequency and the number of events, after good statistical and experimental analysis. That is it.
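The 11245 turns per second quoted above is just the speed of light divided by the ring circumference; a one-line check (the circumference is the usual ~26.7 km figure):

```python
# Revolution frequency of an ultrarelativistic proton in the LHC ring:
# the protons move at essentially the speed of light around the circumference.
C_M_PER_S = 2.99792458e8
LHC_CIRCUMFERENCE_M = 26659.0   # ~26.7 km

def revolution_frequency_hz(circumference_m=LHC_CIRCUMFERENCE_M):
    return C_M_PER_S / circumference_m

if __name__ == "__main__":
    print(f"{revolution_frequency_hz():.0f} turns per second")  # ~11245 Hz
```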

If you think all this is useless, let me tell you… about history, and later about medicine. Any particle collider has transversal applications. A complex detector like ATLAS or CMS is a set of wires, electronics and apparatus with high-end applications. Of course, you can think that a 14 G€ machine is expensive. But you must think globally. The LHC has cost every European citizen only a few euros per person. Does it deserve the money spent? Why make 9 T magnetic fields and 15-meter detectors? The same question arises in different periods of human history. Nobody, for sure, predicted that GR and the gravitational correction to time measurements would be necessary to keep you from getting lost with the GPS system, but it is true. GPS devices could not work properly if GR were not correct. There, with satellites moving at about 14000 km/h, clock offsets of a few tens of microseconds per day (about -7 \mu s from special relativity plus +45 \mu s from gravitation) must be corrected to get a proper position at sea, or in any remote part of the globe. What about accelerators? The first mass use of accelerators was, in fact, TV monitors. Did you enjoy old TV? Cathode rays, studied by J.J. Thomson, were a basic tool for the first TV designs. Van de Graaff generators are still common tools for showing the effects of electricity, but you also have other interesting accelerators named pelletrons, and the one by Cockcroft-Walton. There are also linear accelerators (LINACs) used for food sterilization, cyclotrons in nuclear research and medicine, and synchrotrons… Synchrotron radiation is important in medicine, but also in materials research and electron microscopy. You can kill microbes and clean materials with synchrotron radiation, and what about those X-ray sources? What about radiation therapy? You now have PET and proton therapy too! You MUST know an important thing: PET arose at CERN, the site of the LHC and of its previous collider, the LEP.
And much of current collider technology, based on calorimetry and crystals at the detectors, is being reused for early diagnosis of terrible illnesses. The production of radiodrugs is also made with collider-aided tools. Ionizing radiation detectors are also important pieces of technology, natural in colliders, that find refurbished uses in medicine. Ionizing particles are naturally charged. Radiation is counted or detected by semiconductors (diodes). Beyond nuclear weapons, you can find a full set of peaceful uses of radiation tools: archaeology, vulcanology, nuclear safety in reactors, fire detectors… How can we detect radiation?

  1. Ionizing radiation is detected with counters (gas or semiconductors) and dosimeters using photographic thin films.
  2. Exciting radiation is detected with thermoluminescent materials, spark counters and other gas detectors.

Who could have told Dirac that antimatter would be useful for positron emission tomography (PET) when he derived his famous equation

    \[\left[i\hbar\gamma^\mu\partial_\mu-e\gamma^\mu A_\mu-mc\right]\Psi=(i\hbar \gamma\cdot D-mc)\Psi=0\]

Medical imaging is a wonderful branch of particle physics. Beyond echography (ultrasound), NMR (nuclear magnetic resonance) requires high magnetic fields, non-ionizing radiation and nuclear relaxation. X-rays are ionizing (that is why you cannot X-ray yourself every day), CT scans (TAC) imply higher doses but are not used too often per person per year, and PET is important in oncology, with low resolution, essentially using fluorine-18 isotopes. Neurology and cardiology are benefiting from particle physics too. Furthermore, isotopes are necessary for those devices, so we curiously need one of the most fascinating predictions of Mendeleev's table 150 years ago: technetium. Technetium sources are required in many medical nuclear resources, not only in PET+CT but also in SPECT, which uses the metastable technetium-99m isotope. Thus, radiation therapy, historically based on X-rays, has evolved into a more multiparticle setting. You can use not only gamma rays for some processes, you can also use electrons and protons for therapy. Proton therapy is new and promising, especially with the great precision of proton beams of about 220 MeV. Old therapies based on Co-60 are known to have generated patient issues some decades after the treatment. Proton therapy machines are now in top hospitals. In principle you could use any “soft” particle for therapy. Neutrinos? Carbon ions? CERN is aware of all of this, and it has some multidisciplinary projects like ENLIGHT and BioLEIR about how to simulate proton radiation and the effects of proton (or oxygen-16) therapy on your health. Gammagraphies are also a medical tool, though doses are important. Dose is defined as radiation energy per unit mass. And radiodrugs specifically designed for customized treatments are on the way. So, please, never say that particle or fundamental physics is useless. You have induction kitchens and microwave ovens at your cooking stations thanks to radiation studies!
Nuclear transmutation is currently possible, and the alchemy promises of ancient times secure your health if done properly. You can forget what a barn is (a particle physics unit of area equal to 10^{-24}cm^2), or that the top quark and Higgs masses are about 173 and 125 GeV, but you should know that particle physics does matter in your lifetime. Even if you don't know about it, or don't find new physics because your job is very different, the searches for SUSY, Dark Matter, extra dimensions or black holes will surely affect you collaterally. Particle colliders are not designed, in the high energy physics community, to produce a single concrete particle, but medical applications are different. Now, while reading these lines, you are being crossed by billions of neutrinos, and by some muons.
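The GPS corrections mentioned earlier can be sketched numerically; the orbital parameters below are rounded textbook values, and the formulas are the standard first-order weak-field ones:

```python
import math

# Daily clock offsets for a GPS satellite: special-relativistic time dilation
# (satellite speed) versus gravitational blueshift (higher altitude).
G = 6.674e-11              # m^3 kg^-1 s^-2
M_EARTH = 5.972e24         # kg
R_EARTH = 6.371e6          # m
R_ORBIT = 2.6571e7         # m (GPS orbital radius, ~20200 km altitude)
C = 2.998e8                # m/s
SECONDS_PER_DAY = 86400.0

v_orbit = math.sqrt(G * M_EARTH / R_ORBIT)   # ~3.9 km/s (~14000 km/h)
sr_shift_us = -(v_orbit**2 / (2 * C**2)) * SECONDS_PER_DAY * 1e6  # clock runs slow
gr_shift_us = ((G * M_EARTH / C**2) * (1 / R_EARTH - 1 / R_ORBIT)
               * SECONDS_PER_DAY * 1e6)                           # clock runs fast

if __name__ == "__main__":
    print(f"orbital speed     : {v_orbit / 1e3:.2f} km/s")
    print(f"SR shift per day  : {sr_shift_us:+.1f} microseconds")
    print(f"GR shift per day  : {gr_shift_us:+.1f} microseconds")
    print(f"net shift per day : {sr_shift_us + gr_shift_us:+.1f} microseconds")
```

Left uncorrected, the net ~38 microseconds per day would translate into kilometres of position error within a day.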

Part(V). General Relativity and the LCDM model.

  • General relativity=Equivalence principle+SR. GR=EP+SR.
  • Curvature=Energy-Momentum.
  • Gravity=Pseudoforce=Geometry.
  • There are gravitational waves.
  • Gravity is weak generally, but it can also be strong at big masses or high densities.
  • GR needs quantum gravity when the Schwarzschild radius (or gravitational size) equals the quantum size (the Compton wavelength). That is (naively) reached at the Planck length, 10^{-35}m. Check:

    \[2\Lambda_Q=R_s=L_P\leftrightarrow\; L_P^2=\dfrac{G\hbar}{c^3}\]

  • Spacetime tells matter-energy how to move, matter-energy tells spacetime how to curve (warp). There is no torsion in classical GR, but you can include it to get Einstein-Cartan theory with nonsymmetrical energy-momentum or Einstein tensors.
  • The large-scale structure of spacetime and the LCDM standard cosmological model. This is the current analogue of the SM for the largest cosmic structures we have today. It predicts lots of things, and explains current data as well as the SM does.
  • We need dark matter (or MOND/MOG) to explain galactic rotation curves and the velocity dispersion of elliptical galaxies:




with \Sigma=M/R^2, a_0=G\Sigma/2=GM/2R^2.

  • The Universe is expanding with H_0\sim 70\,km/s/Mpc at large scales. There are some discrepancies between current measurements of the Hubble parameter H_0 (the Hubble tension). But on average it is about 70 in conventional units (km/s/Mpc).
  • The Big Bang model: the cosmic microwave background and its anisotropies confirm the LCDM previsions.
  • The current CMB temperature is about 2.73 K, up to some anisotropies in the sky. A cosmic neutrino background is also expected, at about 1.945 K or less (depending on extra non-SM neutrinos and other BSM physics). The relic graviton background is also expected, at about 0.9 K or less: T_{CgB}\sim\left(4/N\right)^{1/3}T_{CMB}. You get the 0.9 K by counting the particle species degrees of freedom N of the SM. If there were additional particles, the cosmic graviton background would be cooler than 0.9 K.
  • The Universe density is close to the critical density, about 5 protons per cubic metre. Primordial fluctuations were the seeds of the current irregularities given by galaxies and other cosmic structures.
  • The universe is flat (euclidean) at cosmic scales (despite spacetime being intrinsically curved). This calls for inflation.
  • Dark matter, if real, should create a wind or flux with a speed of about 300 km/s.
  • The main evidence for cold dark matter comes from flat rotation curves in spiral galaxies and the velocity dispersion in elliptical galaxies. Zwicky found evidence for this already in 1933 (more mass was required to explain galactic motions), and Vera Rubin confirmed it in the 1970s. There is also evidence for DM from gravitational lensing observations and the Bullet Cluster.
  • Galaxies are “flat” due to interactions between matter and gravity. The role of DM in galaxy formation and evolution is still a hot research topic.
  • Simulations of the Universe with DM and/or Dark Energy are consistent with the LCDM paradigm. This does not imply that models without dark matter cannot exist. However, the DM evidence comes from several different sources, not a single one. Simulations have the power to discriminate between some models.
  • Dark matter is collisionless and does not clump like ordinary matter, but it forms haloes. Halo dynamics is, however, poorly understood. Haloes are believed to be spherically symmetric, but we do not know for sure.
  • Type Ia supernovae measure the Hubble constant and show that the expansion of the Universe is accelerating, not decelerating.
  • Dark energy, or the cosmological constant, remains a big puzzle even today. What is it? Quintessence? Phantom energy? The mere vacuum energy? Roughly 70% dark energy, 25% dark matter, 5% normal matter: we are a mere 5% of the Universe. DM is likely made of some neutral particles not in the SM (one or several types!). We have some ideas of what DM could be, but no proof of its existence by direct production has been managed so far.
  • GW observations plan to measure the Hubble parameter precisely in order to resolve some tensions in the current data. H_0=\dfrac{\dot{a}}{a} evaluated at the current time.
  • Friedmann equations. For homogeneous and isotropic Universes, like ours, the GR field equations can be recast into two simple equations, called the Friedmann equations:

    \[\left(\dfrac{\dot{a}}{a}\right)^2=\dfrac{8\pi G\rho}{3}+\dfrac{\Lambda c^2}{3}-\dfrac{\kappa c^2}{a^2}\]

    \[\dfrac{\ddot{a}}{a}=-\dfrac{4\pi G}{3}\left(\rho+\dfrac{3P}{c^2}\right)+\dfrac{\Lambda c^2}{3}\]

  • Known issues: singularities, the nature of dark matter and dark energy, the finding of the cosmic neutrino background and the relic cosmic graviton background, the finding of the stochastic gravitational wave background, the finding of the inflationary signatures of the universe, the testing of multiverse ideas (is it even possible?), the final fate of the observable universe (will protons decay? will spacetime disappear? what will black holes leave after they evaporate?).
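The relic background temperatures quoted in the list follow from the degrees-of-freedom scaling; the (4/11)^{1/3} neutrino factor is standard, while the graviton estimate assumes the (4/N)^{1/3} scaling with the usual SM value of N:

```python
# Relic background temperatures from degrees-of-freedom counting.
T_CMB_K = 2.725
N_SM = 106.75   # effective relativistic degrees of freedom of the SM at high T

t_neutrino_K = (4.0 / 11.0) ** (1.0 / 3.0) * T_CMB_K   # ~1.95 K
t_graviton_K = (4.0 / N_SM) ** (1.0 / 3.0) * T_CMB_K   # ~0.9 K

if __name__ == "__main__":
    print(f"cosmic neutrino background: {t_neutrino_K:.3f} K")
    print(f"relic graviton background : {t_graviton_K:.2f} K")
```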

After the two modern revolutions of the 20th century, physicists are still stunned by their precision. Relativity (both special and general) and Quantum Physics (in its ultimate current form, the Standard Model) rule with surprising accuracy and precision in the realm of experimental physics.

Two pillars, the yin and yang of physics: relativity (general relativity) reigns in the macroworld, while quantum mechanics (the Standard Model) remains unbreakable in the microworld. Where do we stand in our search for the ultimate theory? Let me take a trip with you into an overview of what we know (more or less) to be true at both extremes of theory and scale.

What is time? What is space? What is mass or energy? What are the fundamental forces? These questions, even when translated into the quantum realm, are classical as well. In fact, it was the genius of Einstein and many other great scientists that operationally defined what they are (more or less, since our scientific knowledge is provisional).

Special relativity (SR) was built in order to unify the laws of mechanics and the (galilean) principle of relativity with the laws of electromagnetism (especially, the symmetry of the Maxwell equations). What is motion? The change of position in time with respect to some reference frame. What is time? Just a parameter or coordinate in a four-dimensional set-up. It turns out that the marriage of classical mechanics and electromagnetism (in 4d) can be done, saving the relativity principle at a cost: in relativity with one time dimension, signal speeds are limited to be less than or equal to the speed of light. In any spacetime diagram, you get a cone when propagating at the speed of light. Light signals relate space with time, and light also relates mass with energy:

    \[E=mc^2\]

However, this fact also implies that Newton's gravity cannot be right. Reason? Gravity propagates instantaneously in newtonian gravity! That is forbidden in special relativity. Einstein realized this, and he had to struggle with superior mathematics to find a theory, locally consistent with special relativity, containing gravity. This theory, a locally special relativistic theory of gravity, is what general relativity (GR) is. However, the theory proved itself greater than its own creator and inventor could ever have imagined in his lifetime. If you plug G_N=0 or c=\infty into GR, you basically recover SR (up to some nasty stuff, as we saw in a recent log). GR is a theory that models spacetime with a metric field. A metric field is a matrix, usually symmetric (I will only consider, as is usual, gravity without torsion here, so the Einstein tensor remains symmetric). Flat spacetime is just the usual Minkowski metric (a diagonal matrix!):

    \[\eta_{\mu\nu}=\mbox{diag}(-1,+1,+1,+1)\]
up to a global sign convention. This GR theory is fascinating. It explains gravity as a curvature of spacetime. In fact, the deviation of the usual perimeter-to-diameter ratio of a circle from \pi is due to gravity itself:

    \[\dfrac{Perimeter}{Diameter}-\pi\sim \dfrac{G_NM}{c^2r}\]
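The size of that deviation is easy to estimate; a sketch evaluating G_NM/(c^2r) at two familiar surfaces:

```python
# Size of the GR correction to the perimeter/diameter ratio, ~ G*M / (c^2 * r),
# evaluated at the surfaces of the Earth and the Sun.
G = 6.674e-11   # m^3 kg^-1 s^-2
C2 = 8.988e16   # c^2 in m^2/s^2

def circle_deviation(mass_kg, radius_m):
    """Dimensionless deviation of Perimeter/Diameter from pi near a mass."""
    return G * mass_kg / (C2 * radius_m)

if __name__ == "__main__":
    print(f"Earth surface: {circle_deviation(5.972e24, 6.371e6):.1e}")   # ~7e-10
    print(f"Sun surface  : {circle_deviation(1.989e30, 6.957e8):.1e}")   # ~2e-6
```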

Gravity, being a force, is really a pseudoforce. Why do we want quantum gravity? It is not only that we want something beyond being able to calculate the cross-section of 2 gravitons transforming into 2 photons (Skobelev):

(1)   \begin{equation*} \sigma(GG\rightarrow\gamma\gamma)=\dfrac{k^4\omega^2}{160\pi}=\dfrac{\pi d_S^2}{10}\end{equation*}

For an electron at rest, it would be very tiny: \sigma\approx 10^{-110}cm^2.

Exercise: derive the above cross-section (be aware of identical particles effects) from


Exercise (II): check the graviton-photon to graviton-photon cross-section formula below, and tell what its main problem is compared to the previous formula.

    \[\sigma(G\gamma\rightarrow G\gamma)=\dfrac{k^4\omega^2}{64\pi}\dfrac{1+\cos^4(\theta/2)}{\sin^4(\theta/2)}\]

It is not only that strong fields induce particle pair creation, e.g., via the Schwinger effect, with rate

    \[\Gamma_S=\dfrac{e^2E^2}{(2\pi)^3\hbar^2 c}\sum_{n=1}^\infty \dfrac{1}{n^2}\,e^{-\frac{n\pi m^2c^3}{e\hbar E}}\]
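A related benchmark number: the exponential suppression in the Schwinger rate becomes of order one at the critical field E_c=m^2c^3/(e\hbar), which is quick to evaluate with standard constants:

```python
# Schwinger critical field E_c = m^2 c^3 / (e hbar),
# above which vacuum pair creation is no longer exponentially suppressed
m_e = 9.109e-31      # kg, electron mass
c = 2.998e8          # m/s
e = 1.602e-19        # C, elementary charge
hbar = 1.055e-34     # J s

E_c = m_e**2 * c**3 / (e * hbar)
print(f"E_c ~ {E_c:.2e} V/m")   # ~1.3e18 V/m
```

Fields of ~10^{18} V/m are far beyond current lasers, which is why the Schwinger effect has not yet been observed directly.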

GR is incomplete. However, GR is a wonderful theory. So is SR. SR says:

    \[(\mbox{Proper time})^2=(\mbox{Time})^2-(\mbox{Distance})^2\]

using units in which c=1. The time measured by a clock at rest is equal to a certain combination of the time of the clock in motion minus the distance, using something similar to Euclidean triangles. They are indeed hyperbolic triangles. GR says space-time is elastic and dynamical. And the equivalence principle, in any of its forms, says in the end that gravity is just curvature or geometry. The shape of space in time implies that space-time grows somehow. It expands (despite the static preconception of the theory when Einstein created it, it soon proved to predict the Universe as something moving itself!). The Einstein field equations are pretty simple:

    \[G_{\mu\nu}+\Lambda g_{\mu \nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}\]

The vacuum energy density is yet a mystery, and that is one of the reasons why we should go beyond GR:

    \[\rho_\Lambda=\dfrac{\Lambda c^4}{8\pi G_N}\]
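A quick numerical evaluation of this density (a sketch; the value \Lambda\approx 1.1\times 10^{-52}\;m^{-2} is an assumed observational input, not quoted in the text):

```python
import math

# Vacuum energy density rho_Lambda = Lambda c^4 / (8 pi G_N), in SI units
Lam = 1.1e-52        # m^-2, assumed observational value of the cosmological constant
c = 2.998e8          # m/s
G = 6.674e-11        # m^3 kg^-1 s^-2

rho_vac = Lam * c**4 / (8 * math.pi * G)   # J/m^3
mp_c2 = 1.503e-10                          # J, proton rest energy

protons_per_m3 = rho_vac / mp_c2
print(protons_per_m3)   # a few proton rest energies per cubic meter
```

The result is a handful of proton rest energies per cubic meter, confirming the famous order-of-magnitude claim below.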

Essentially, its value today is about a proton per cubic meter (do the numbers yourself!), and it is very similar (coincidence problem!) to the matter energy density or the dark matter energy density today. But it was not so in the past! The cosmological constant can be interpreted as a Lagrange multiplier, a volume pressure term or a negative counterterm in the EFE. Nobody knows why it has the value it has today. The curvature of spacetime has a temporal component. The purely temporal components of the curvature reproduce the Newtonian potential in weak gravitational fields. It also implies the existence of gravitational fields/waves traveling at the speed of light…Or is light the one that travels at the maximal possible speed in local spacetime? The stiffness of spacetime is inversely proportional to G_N. It is pretty big, so you need big masses or densities in order to make curved-spacetime effects appear clearly. GWs are so weak that we had to search for them in the noise on Earth (in space the story is completely different, but that is the subject of a future blog post!). Just as the LHC probes zeptometric scales, GR probes much bigger scales. The SM has a precision of 1 part in 10^{10} or so. GR has similar values of precision in some observables. Gravity bends spacetime and triangles are, in principle, curved (it turns out that the largest spacetime structures of the Universe are Euclidean, though! This flatness can be explained using inflationary ideas). What else? Einstein’s theory of gravity, GR, predicts as well:

  • Precession of orbits. Abandon your Keplerian world. Ellipses precess. The critical case in the solar system is the famous Mercury orbit. It was the genius of Einstein to realize that his theory could explain Mercury without dark matter or a planet Vulcan.
  • Gravitational time delay. The closer you are to any heavy mass, the slower your time flows.
  • Gravitational lensing of light. Eddington checked this a century ago, in 1919. Einstein was already a celebrity, but this confirmation of his GR theory elevated him to the level of God of Physics.
  • The existence of vacuum solutions we call black holes. Even though other people had speculated about dark stars before in Newtonian gravity, GR naturally contains solutions with horizons and darkness. We do not understand their ultimate destiny. That is another reason why GR is not complete.
  • Deviation from Euclidean geometry, measured with angular measurements:

    \[\alpha+\beta+\gamma-\pi\sim \dfrac{GM}{c^2r}\]

  • The universe is expanding, and it likely had a beginning in time.
  • The universe has a vacuum energy that is not null (it is a 20 years old rediscovery of Einstein’s cosmological constant).
  • Gravitational waves exist (radically new astronomy by LIGO is going on at these moments). The era of multimessenger astronomy is just beginning.
  • Black holes are real things. Recently, we obtained the photo of M87*, and Sgr A*, our galactic BH, is being analyzed right now.

Let me show you some GW formulae:

  • Gravitational luminosity formula reads

    \[L_{GW}=-\left(\dfrac{dE}{dt}\right)_{GW}=\dfrac{G}{5c^5}\left\langle\dfrac{\partial^3Q_{ij}}{\partial t^3}\dfrac{\partial^3Q_{ij}}{\partial t^3}\right\rangle\]

such that, for a binary circular system,


for M=M_1+M_2 and \mu=M_1M_2/M.

  • The orbital radius decay rate due to GW emission and the coalescence time are given by:

    \[\dot{a}=-\dfrac{64G^3}{5c^5}\dfrac{\mu M^2}{a^3}\]

    \[\tau_c=\dfrac{5}{256}\dfrac{c^5a_0^4}{G^3\mu M^2}\]
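Plugging numbers into these formulas (a sketch; the binary parameters, two 1.4 solar-mass neutron stars starting one solar radius apart, are an illustrative choice of mine, not from the text):

```python
# Coalescence time tau_c = (5/256) c^5 a0^4 / (G^3 mu M^2) for a circular binary
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Msun = 1.989e30      # kg

m1 = m2 = 1.4 * Msun
M = m1 + m2          # total mass
mu = m1 * m2 / M     # reduced mass
a0 = 6.96e8          # m, illustrative initial separation (one solar radius)

tau = (5 / 256) * c**5 * a0**4 / (G**3 * mu * M**2)   # seconds
tau_yr = tau / 3.156e7
print(f"coalescence in ~{tau_yr:.1e} years")
```

Such a tight neutron-star binary would merge in tens of millions of years, well within a Hubble time, which is why LIGO sees these events.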

Is the LCDM model fair? Yes:

  • It predicts all the GR classical tests, as it is based on GR.
  • It predicts the observed expansion of the Universe.
  • The Universe is spatially homogeneous and isotropic. The perfect cosmological principle is wrong: the Universe is not eternal. A restricted cosmological principle holds today, though. We are not special. At large scales, the Universe looks the same everywhere.
  • The scale factor measures expansion via R(t)=a(t)R. The Einstein field equations for LCDM reduce to the Friedmann equations.
  • The critical density is close to the cosmic density, \rho_c=3H_0^2/8\pi G. Knowing H_0 allows you to measure the age and size (up to scale factor) of the Universe. Knowing H_0 and G allows you to compute the critical density for the Universe to collapse. We are close to that value, but the accelerating Universe is diluting galaxies into vacuum. To calculate the critical density, take the Hubble law v=H_0R and equate the kinetic energy of a test mass m to its gravitational potential energy:

    \[\dfrac{1}{2}mH^2_0R^2=\dfrac{4}{3}\pi \rho G R^2m\]

and then \rho_c follows straightforwardly from elementary algebra.
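The elementary algebra can be checked numerically (a sketch; H_0\approx 70 km/s/Mpc is an assumed value):

```python
import math

# Critical density rho_c = 3 H0^2 / (8 pi G), with H0 ~ 70 km/s/Mpc (assumed)
H0 = 70 * 1000 / 3.086e22   # s^-1 (km/s/Mpc converted to SI)
G = 6.674e-11               # m^3 kg^-1 s^-2

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_c ~ {rho_c:.1e} kg/m^3")   # ~1e-26 kg/m^3
```

That is roughly five hydrogen atoms per cubic meter: the Universe really is close to this razor-thin density.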

  • Some parts of the cosmic history are yet surrounded by mystery and unknowns. We have to live with ignorance there. The Planck era or the inflation era are still hard to test. We believe we understand the QCD era, more or less, but not with absolute safety.
  • Light element abundance is another great prediction of LCDM. Indeed, it fits nicely with observations. We have to check some early-Universe young stars here, the famous population III. The James Webb Space Telescope (JWST) will be looking at Pop III stars and it will show us wonderful things for sure. The current Universe is old; stars are second- or third-generation stars, like our Sun (a 3rd generation star).
  • GW are there. We will see the Universe BH and other fantastic GW sources invisible for light with these tools. GW evidence before LIGO discovery 3 years ago was found in pulsars (1993 Nobel Prize).

The Big Bang happened not in a single place, but everywhere, in a single moment of time, 13.7 Gyr ago. There is no centre of the Universe. We see the past of the Universe in the night sky. During the first seconds, the Universe created radiation; then the first particles arose via QCD decoupling; later it created nuclei and elements (primordial nucleosynthesis); and, finally, we got astronomically/astrophysically bound objects like galaxies, clusters,…Evidence for the Big Bang includes the elements that form everything, the radiation from the CMB and its anisotropies, structure formation…The nuclei arose after the first 3 minutes, and in about 380000 years, recombination was possible and the light of the CMB was released. Inflation is required to explain some puzzles (flatness and anisotropies are the main two, but there are other problems difficult to solve without inflation). Inflation naturally requires scalar fields (or something similar) exponentially inflating the universe. The matter-antimatter asymmetry is responsible for us being here. Neutrinos may well hold part of that dark mystery, or even a partial solution to dark matter. BHs can naturally violate the conservation of baryon number and thus trigger proton decays. The Planck mass naturally gives a lower theoretical bound of 10^{45} years (or even longer with care, about 10^{140} yrs) for the proton lifetime from BH virtual fluctuations and/or spacetime foam models.

That is all…folks. The end? Choose a final death for our Universe:

  1. Big Freeze (No Freezer will save you).
  2. Big Crunch (No piston will save you).
  3. Big Rip (No gravity will save you).
  4. Little Rip (No soft gravity will save you).
  5. Big Decay (Higgs field instability/metastability: no force will save you).

See you in other blog post soon!

LOG#216. Asian length units: the list.

Asian units for length: a non-exhaustive list.

1 Chi (China)=\dfrac{1}{3} m=33\dfrac{1}{3} cm

1 Chi (Hong-Kong)=14\dfrac{5}{8} inches=0.371475 m

1 Chi (Taiwan)=1 shaku (Japan)=\dfrac{10}{33} m=0.3030 m

1 chek =0.371475 m

1 tsun =0.1 chek (Hong-Kong)

1 tsun =3\dfrac{1}{3} cm (Taiwan, China)

1 fan= 0.1 tsun

1 shaku (Japan, korean “ja”)=\dfrac{10}{33} m

1 ken (Japan)=1 hiro=6 shaku=\dfrac{60}{33} m=1\dfrac{9}{11} m\approx 1.818 meters

1 jo (Japan)=10 shaku=\dfrac{100}{33} meters

1 cho (Japan)=360 shaku=\dfrac{3600}{33} m\approx 109.1 meters

1 ri (Japan)=12960 shaku=\dfrac{129600}{33} m\approx 3927 meters

1 sun (Japan)=10^{-1}shaku=\dfrac{1000}{33} mm

1 bu (Japan)=10^{-2}shaku=\dfrac{1}{330} m\approx 3.030 mm

1 rin (Japan)=10^{-3}shaku=\dfrac{1}{3300}m\approx 0.3030 mm

1 mo (Japan)=10^{-4}shaku=\dfrac{1}{33000}m\approx 0.03030 mm

10 chi = 17 hang
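The shaku-based Japanese units above lend themselves to a tiny exact converter (a sketch; the unit table is transcribed from the list, using the exact 10/33 m definition of the shaku):

```python
from fractions import Fraction

SHAKU_M = Fraction(10, 33)   # 1 shaku = 10/33 m exactly

# Multiples of the shaku, as listed above
UNITS_IN_SHAKU = {
    "mo": Fraction(1, 10000), "rin": Fraction(1, 1000),
    "bu": Fraction(1, 100),   "sun": Fraction(1, 10),
    "shaku": Fraction(1),     "ken": Fraction(6),
    "jo": Fraction(10),       "cho": Fraction(360),
    "ri": Fraction(12960),
}

def to_meters(value, unit):
    """Convert a length in a shaku-based unit to meters (float)."""
    return float(value * UNITS_IN_SHAKU[unit] * SHAKU_M)

print(to_meters(1, "ken"))   # ~1.818 m
print(to_meters(1, "ri"))    # ~3927 m
```

Using exact fractions avoids the rounding noise you would get from the repeating decimal 0.3030….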


LOG#215. Entanglement is the key?

Hi everyone!

Is entanglement the key? A tribute to Ant-Man and Hawking today as the preamble, Quantum Chess playing for heroes like you:

Entanglement is the subject we have today. Entanglement is that spooky, weird feature of the quantum realm that stunned Einstein and realist scientists who believed that reality is a preexistent “thing/stuff/entity”. More precisely, entangled states are quantum states of composite systems, made of parts, that cannot be written as products of the states of the single subsystems. Let me introduce a little bit of terminology:

  • Pure states.
  • Mixed states.
  • Separable states.
  • Entangled (non-separable!) states.

Entanglement is related to the above 4 types of states. Quantum Mechanics as we know it today is based on some basic axioms:

  1. Superposition (linearity) of quantum states.
  2. Heisenberg uncertainty principle (HUP).
  3. Unitarity.
  4. Projection postulate.
  5. Quantum composite systems or states can be made up from tensor products of single systems. What is a tensor product? It is a way to create composite matrix states from two or more single matrices. It is not the only way, but it is the one that works.

Take for instance N=2 (two parties, two subsystems creating one big system). The Hilbert space of the composite two-party quantum system is the tensor product H=H_A\otimes H_B, i.e., the tensor product of the two subsystem Hilbert spaces. Then, the quantum states of the composite system are given by:

    \[\vert\Psi>_{AB}=\vert AB>=\sum c_{ij}\vert i>_A\vert j>_B\]

Then, a state is separable IFF you can find coefficients c_i(A) and c_j(B) such that c_{ij}=c_i(A)c_j(B) in the previous expansion. That is, if you can factorize the state as a single product of the two single-system quantum states, the state is separable; otherwise the state is ENTANGLED. You can generalize the above definition to any number of systems (parties!). The general n-party quantum state is defined on a tensor product of the subsystem Hilbert spaces as follows:

    \[H=\bigotimes_{i=1}^n H_i\]

(1)   \begin{equation*} \vert A_1\;A_2\;\cdots A_n>=\sum c_{i_1i_2\cdots i_n}\vert i_1>_{A_1}\vert i_2>_{A_2}\cdots \vert i_n>_{A_n}\end{equation*}
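Separability of a two-party pure state can be tested numerically via the Schmidt rank of the coefficient matrix c_{ij}: Schmidt rank 1 means separable, rank > 1 means entangled. A minimal sketch with numpy:

```python
import numpy as np

def schmidt_rank(c, tol=1e-10):
    # The singular values of the coefficient matrix c_ij give the Schmidt
    # decomposition; the number of nonzero singular values is the Schmidt rank.
    s = np.linalg.svd(c, compute_uv=False)
    return int(np.sum(s > tol))

# Product state |0>_A |0>_B : c_ij = [[1,0],[0,0]]
product = np.array([[1.0, 0.0], [0.0, 0.0]])
# Bell state (|00> + |11>)/sqrt(2): c_ij = [[1,0],[0,1]]/sqrt(2)
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)

print(schmidt_rank(product))  # 1 -> separable
print(schmidt_rank(bell))     # 2 -> entangled
```

This is exactly the factorization test of the text, c_{ij}=c_i(A)c_j(B), phrased as a matrix-rank condition.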

That’s entanglement!!!! You would then ask, why is it “hard”? Well, there are several reasons why entanglement is hard and why entanglement matters A LOT in QM affairs. Let me start with the first item. Why is entanglement hard? A list:

  • Entanglement is a subtle non-separability meaning certain non-locality compatible with special relativity. Yes! It is true. Entangled states have certain abilities that allow you to do magic at very large distances but causality and finite propagation of signals are not violated.
  • Bell’s theorem (more on this later). Bell found out that the existence of entangled states in QM allows you to test the existence of hidden variable theories. It turns out that QM holds superbly. Unchallenged. Bell-type experiments kill any hope for local realist theories. You need a very special type of theory if you want to mimic the QM results of Bell-type experiments: it has to be contextual. Reality is not independent from the way we measure it, and indeed, there are systems which act as if they were not independent from their parts even when separated by kilometers of distance. Chinese researchers have indeed built a satellite using entanglement to secure communication.
  • Currently, the EPR (Einstein-Podolsky-Rosen) experiment, the type of experiment Bell indeed devised, has drawn the attention of quantum gravity theorists due to the black hole information problem and the nature of gravity. Van Raamsdonk proposes that gravity “is” entanglement, and Susskind and collaborators are developing an idea summarized in the formal equation ER=EPR. ER is the Einstein-Rosen bridge in General Relativity. ER=EPR states that quantum entanglement is caused by two (or more) quantum particles being connected by (micro)wormholes (Einstein-Rosen bridges!). That quantum entanglement could be caused by non-simply-connected quantum microwormholes is quite a statement. Hard to test experimentally. Van Raamsdonk indeed suggests that gravity itself is caused by entanglement.

The relationship between gravity (“classicality”) and entanglement is an old friend. In fact, there is another point where this idea arises, but I am not sure my readers will know it. Some time ago, Rigolin proved that a high number of entangled particles can beat the Heisenberg Uncertainty Principle bound. Even more, he conjectured that in the limit of an infinite number of entangled particles, you get “classical” zero dispersion. That is, with an infinite number of entangled particles, you could in principle evade the uncertainty relationship. From this viewpoint, (the amount of) entanglement REDUCES uncertainty. Reciprocally, separability enlarges uncertainty. You can read Rigolin’s original work here http://cds.cern.ch/record/499980/files/0105057.pdf. Wait, what if you modify the HUP by some generalized form of it like the EHUP, GUP or EGUP? Logical thoughts impose here: the EHUP and EGUP or GUP make the system more quantum and less classical, enhancing the bounds and reacting against the reduction of uncertainty by a very large number of entangled particles. The GUP, EHUP and EGUP have the opposite effect to entanglement and make entangled states more uncertain. See about this here https://arxiv.org/pdf/1706.10013.pdf

You can also read that noncommutativeness (as a bonus) makes entanglement and nonclassicality more evident in the paper: https://arxiv.org/pdf/1506.08901.pdf 

And now? We return to some vocabulary! N-level pure states are defined formally as quantum states

    \[\vert \Psi>=\sum_{i=0}^{N-1}c_i\vert i>\]

Thus, pure states are simple linear superpositions of quantum states! You can get qubits with N=2, qutrits with N=3, and qu\inftyits with N=\infty (quantum fields!). Even more, you could add a continuous term as well and spoil the finite-term sum. Of course, entanglement of infinite dimensional systems is not usual in standard discussions of quantum computing, but it can be added without loss of generality. What about mixed states? Well, we need a new gadget to explain mixed states. This new device is the density matrix. A mixed state is a statistical ensemble of copies of the N-level system. For a mixed ensemble the density matrix reads

    \[\rho=\sum_i w_i\vert i><i\vert\]

where \sum_i w_i=1 by probability conservation. Now, take an N=2 party system. If it is separable, then by definition you can write the density matrix as the following sum of tensor products:

    \[\rho = \sum_i w_i\,\rho_i (A)\otimes\rho_i (B)\]

and where \sum_j\vert c_{ij}\vert^2=1 and we can generalize this to N-party systems as

(2)   \begin{equation*}\rho=\sum_i\omega_i\rho_{i_1}^{A_1}\otimes\cdots\otimes\rho_{i_n}^{A_n}\end{equation*}

for separable states with

    \[\sum_j\vert c_{ij}\vert^2=\sum_i\omega_i=1\]

by probability conservation once again.

The next step is to define the so-called reduced density matrix. It is a density matrix created from the big one by tracing over one or more subsystems. For a single reduction:


and for the reduced density matrix obtained by tracing over B (N=2 party case) you get

    \[\rho_A=\sum_j <j\vert_B\left(\vert\Psi><\Psi\vert\right)\vert j>_B=\mbox{Tr}_B\rho_T\]

and similarly you could get the reduced density matrix \rho_B by tracing over the A states.

Entanglement example 1. Bell states.

Take N=2 two-level systems. H_A=\left[\vert 0>_A,\vert 1>_A\right] is the basis for the A system and H_B=\left[\vert 0>_B,\vert 1>_B\right] the basis for quantum states of the B system. For the composite (tensor product) system, you can find 4 interesting Bell states that are entangled and cannot be decomposed into single products of basis states. They are:

(3)   \begin{equation*}\vert BELL>_1=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 0>_B+\vert 1>_A\vert 1>_B\right]\end{equation*}

(4)   \begin{equation*}\vert BELL>_2=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 0>_B-\vert 1>_A\vert 1>_B\right]\end{equation*}

(5)   \begin{equation*}\vert BELL>_3=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B+\vert 1>_A\vert 0>_B\right]\end{equation*}

(6)   \begin{equation*}\vert BELL>_4=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B-\vert 1>_A\vert 0>_B\right]\end{equation*}

They are indeed special in a sense. They are maximally entangled states, i.e., they are the states with the greatest degree of entanglement possible within the composite system.

Entanglement example 2. Bell 4 reduced density matrix.

Take the 4th Bell state:

(7)   \begin{equation*}\vert BELL>_4=\dfrac{1}{\sqrt{2}}\left[\vert 0>_A\vert 1>_B-\vert 1>_A\vert 0>_B\right]\end{equation*}

Trace over the B subsystem:

    \[\rho_A=\mbox{Tr}_B\rho_T(\Psi)=\dfrac{1}{2}\left(\vert 0>_A<0\vert_A+\vert 1>_A<1\vert_A\right)\]

Then, you see that the reduced density matrix of an entangled pure ensemble IS a mixed ensemble or state. This result is general: for bipartite pure states, \rho is entangled iff the reduced states are mixed rather than pure!
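This partial trace is easy to reproduce numerically (a sketch, assuming the computational-basis ordering |00>, |01>, |10>, |11>):

```python
import numpy as np

# Bell state |BELL>_4 = (|01> - |10>)/sqrt(2) as a 4-vector
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())   # total (pure) density matrix, 4x4

# Partial trace over B: reshape to (a, b, a', b') and contract the B indices
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)                      # -> [[0.5, 0], [0, 0.5]], maximally mixed
print(np.trace(rho_A @ rho_A))    # purity 0.5 < 1, so the reduced state is mixed
```

The purity Tr(\rho_A^2)=1/2 < 1 is the numerical signature that the reduced state is mixed, i.e., that the original pure state was entangled.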

Entanglement example 3. Other entangled states.

For M>2 parties, with two levels, there is a very interesting generalization of Bell states. It is called the GHZ state:

(8)   \begin{equation*}\vert GHZ>=\dfrac{1}{\sqrt{2}}\left(\vert 0>^{\otimes M}+\vert 1>^{\otimes M}\right)\end{equation*}

There are also the so called spin squeezed states, a special set or type of squeezed coherent states. They are important in optics. For 2 bosonic modes, there is the NOON state:

(9)   \begin{equation*}\vert NOON>=\dfrac{\vert N>_A\vert 0>_B+\vert 0>_A\vert N>_B}{\sqrt{2}}\end{equation*}

This is similar to Bell states except that instead of the 0,1 kets you have N,0 kets. That is, you have N excitations or N photons in one mode and 0 photons in the other mode. Well, it turns out that Bell states, GHZ states and NOON states are maximally entangled. However, there are other, non-maximally entangled states. For instance, the previously mentioned spin squeezed states or the twin Fock states. NOON states can also be “phased”, such that you build up a modulated NOON as

    \[\vert NOON>=\dfrac{\vert N>_A\vert 0>_B+e^{iN\theta}\vert 0>_A\vert N>_B}{\sqrt{2}}\]

This state represents the superposition of N particles in mode A with 0 particles in mode B, and vice versa, shifted by a phase factor. NOON states are useful objects in quantum metrology since they are capable of making precision phase measurements in optical interferometers. Build up the NOON observable A as follows:

    \[A=\vert N,0><0,N\vert+\vert 0,N><N,0\vert\]

Then, you can easily prove that the expectation value of A in a NOON state switches between +1 and -1 as the phase changes from 0 to \pi/N. Moreover, the error in the phase measurement is indeed

    \[\delta \theta=\dfrac{\delta A}{\vert \dfrac{d<A>}{d\theta}\vert}=\dfrac{1}{N}\]

This is the so-called Heisenberg limit, in fact an improvement over the standard quantum limit (SQL) given by

    \[\delta_{SQL}=\sqrt{\dfrac{\hbar \theta}{M}}\]
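The 1/N Heisenberg scaling can be checked numerically in the two-dimensional subspace spanned by |N,0> and |0,N> (a sketch; the test phase theta is an arbitrary choice of the example):

```python
import numpy as np

def noon_phase_error(N, theta=0.4):
    # Phased NOON state restricted to the subspace {|N,0>, |0,N>}
    psi = np.array([1.0, np.exp(1j * N * theta)]) / np.sqrt(2)
    A = np.array([[0, 1], [1, 0]], dtype=complex)   # |N,0><0,N| + |0,N><N,0|
    mean = np.real(psi.conj() @ A @ psi)            # <A> = cos(N theta)
    var = np.real(psi.conj() @ (A @ A) @ psi) - mean**2
    dmean = -N * np.sin(N * theta)                  # analytic d<A>/dtheta
    return np.sqrt(var) / abs(dmean)

for N in (1, 5, 10):
    print(N, noon_phase_error(N))   # -> 1/N for every N
```

Since A^2 is the identity on this subspace, \delta A=|\sin N\theta| while |d\langle A\rangle/d\theta|=N|\sin N\theta|, so the ratio collapses to 1/N exactly.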

The simplest non-Bell GHZ state is made with M=3 parties. GHZ states are used in very important applications:

  • Quantum communication protocols.
  • Quantum cryptography protocols.
  • Secret key sharing.

There is no standard measure of multipartite entanglement because, as we will see, there are different types of multipartite entanglement. Indeed, the different kinds of entanglement are not generally mutually convertible. The GHZ state is maximally entangled. For M=3, take

    \[\vert GHZ^3>=\dfrac{\vert 000>+\vert 111>}{\sqrt{2}}\]

    \[\rho_3=\mbox{Tr}_3\left(\dfrac{\vert 000>+\vert 111>}{\sqrt{2}}\right)\left(\dfrac{< 000\vert+< 111\vert}{\sqrt{2}}\right)\]

so you get an unentangled mixed state:

    \[\rho_3=\left(\dfrac{\vert 00><00\vert+\vert 11><11\vert}{2}\right)\]
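Numerically, tracing out the third qubit of the GHZ state confirms this (a sketch with numpy, computational-basis ordering assumed):

```python
import numpy as np

# |GHZ^3> = (|000> + |111>)/sqrt(2) as an 8-vector
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

# Trace over the third qubit: reshape to (ab, c, a'b', c') and contract c = c'
rho12 = np.trace(rho.reshape(4, 2, 4, 2), axis1=1, axis2=3)
print(np.round(rho12, 3))   # -> (|00><00| + |11><11|)/2, a separable mixed state
```

Note the crucial difference from the Bell case: here the off-diagonal coherences |00><11| vanish after the trace, leaving only classical correlations.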

Thus, this GHZ state has certain 2-particle quantum correlations, but they are of “a classical nature” somehow. GHZ leads to striking non-classical correlations too. They allow you to test the internal inconsistencies of the EPR elements of reality. The generalized GHZ state for d levels and M parties is given by the state

(10)   \begin{equation*}\boxed{\vert GHZ^d>=\dfrac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\vert j>^{\otimes M}}\end{equation*}

Maybe you want to experiment with quantum states and quantum entanglement. There is a MATLAB toolbox for exploring quantum entanglement theory. It is called QETLAB. I have not checked it, and I am sure there are other similar toys and apps out there. Let me know, anyway!

Entanglement example 4. W-states.

There is an interesting 3-qubit entangled quantum state called the W-state. It is interesting for the storage of quantum memories. It reads:

    \[\vert W>=\dfrac{1}{\sqrt{3}}\left(\vert 001>+\vert 010>+\vert 100>\right)\]

For N-qubits the W-state is

    \[\vert W>_N=\dfrac{1}{\sqrt{N}}\left(\vert 0\cdots 1>+\cdots+\vert 1\cdots 0>\right)\]

The W-state is just a linear quantum superposition of all possible pure states with exactly one excited state and the others being in the ground state, weighted with the same probability.
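A minimal sketch generating the N-qubit W-state vector (the computational-basis ordering is an assumption of the sketch):

```python
import numpy as np

def w_state(n):
    # Equal-weight superposition of all n-qubit basis states
    # with exactly one excitation (one '1' bit)
    psi = np.zeros(2**n)
    for k in range(n):
        psi[1 << k] = 1.0        # basis index of |0...010...0>
    return psi / np.sqrt(n)

w3 = w_state(3)
print(w3)   # amplitude 1/sqrt(3) on |001>, |010>, |100>, zero elsewhere
```

Unlike GHZ, the W-state keeps pairwise entanglement when one qubit is lost, which is why it is attractive for quantum memories.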

Multipartite entanglement is much more complicated. M>2 entanglement is richer in possibilities than M=2 entanglement. With M=2 there are fully entangled (maximally entangled) and fully separable states. However, things go wild with M>2 parties. You can also have partially separable or partially entangled states. A fully M-partite separable state

    \[\rho_{A_1\cdots A_M}=\sum_i P_i\rho^i_{A_1}\otimes \cdots \otimes \rho^i_{A_M}\]

is fully separable when written in this way. There are also pure product states

    \[\vert A_1\cdots A_M>=\vert A_1>\otimes\cdots\vert A_M>\]

and partially entangled states in between the fully (maximally) entangled and the fully separable cases.

There are other cool measures of entanglement related to states. They have weird names like tangles or hyperdeterminants! However, in the end, all are expressed in terms of pure or mixed states, with a certain amount of partial or maximal (or null) entanglement!

What else? Beyond Bell, GHZ, W, NOON and similar states, there are many interesting topics related to entanglement these days. These subjects include:

  • Going from multipartite to bipartite entanglement.
  • Entropy bounds related to entanglement and entanglement entropy.
  • Quantum channels and quantum channel capacities.
  • LOCC=Local Operations and Classical Communication observables.
  • Entanglement distillation (yes, you can distill entanglement; we have seen an example before!).
  • Quantum teleportation (it is not just like beam me up, Scotty, but it rocks).
  • Quantum cryptography and quantum communication, quantum key sharing.
  • Quantum game theory.
  • Black hole information paradoxes.
  • The EPR=ER and Gravity=Entanglement ideas.
  • Hyperentanglement, i.e., the simultaneous entanglement between multiple degrees of freedom of 2 or more entangled systems.

In summary, quantum entanglement is a fascinating topic. I am sure many of you knew many of these things before. Perhaps Rigolin’s works and the research on how to spoil or enhance uncertainty via entanglement and/or noncommutativity (GUP, EHUP, EGUP) is the strangest topic I discussed here today. Did you enjoy it? I hope so!

Challenge question: could entanglement affect gravity, and hence time/space measurements? How could you know if your time or space is entangled with mine, or with the local time/space measurements in other parts of the observable Universe?

See you in another blog post!!!!!

LOG#214. Supertranslations.

Surprise! Twice in a day!

Yes, I have much to tell yet, alive…Beyond the holographic principle, and things like gravity being Yang-Mills squared or open strings squared being closed strings, we have had other important developments in the theory of black holes and theoretical physics these years, even as Hawking passed away… Let me say I am not going to surprise you too much, since I am sure many readers know what I am going to say after reading the title of this entry.



Firstly, let me write a short chronological note:

1965. Weinberg writes a paper on infrared (IR) soft photons (and gravitons). Soft particles are zero energy particles.

1985. Braginsky and Thorne write a paper on gravitational wave bursts with memory. They conjectured it could arise from some collisions of stars and black holes. 1965=1985. What the hell is going on here?

1985. The question of how gravity and other forces act on large scales is temporarily abandoned. Lack of experimental tests. You know.

2014. The resemblance between the 1965 and 1985 works is highlighted by Strominger et alii. The hot point is how gravity and other long-range forces act on large scales and affect the black hole information paradox! What the hell? Almost 50 years again, reboot! 1965 used “p”, 1985 used “P”. Strominger et al. used 1411.5745 stuff (arXived!).

History track details: in 1962, Bondi, Van der Burg, Metzner and Sachs (the latter independently) realized that DIFFERENT observers at constant speed would disagree on the limit GR\rightarrow SR. Wait, wait,…How? Yes, you are reading it well. Different observers at constant speed can observe different SR limits when you go far away from the source. Therefore, the limit of flat spacetime from curved spacetime is not straightforward at all! The usual textbook mantra that you recover special relativity locally from general relativity is subtle. GR textbooks say that GR reduces to SR in the weak field approximation, when you go far away from the source and locally your spacetime is “flat”. However, the floppy spacetime continuum makes this statement vague. Indeed, it is not accurate enough. The spacetime continuum is indeed a fluid-like or crystal-like, rigid thing. We would naively expect that what happens in our solar system is independent of the rest of the galaxy or the local group! Gravity diminishes at large distances, and planets or stars, therefore, should be independent of one another and of distant enough objects! Well, it seems this is an oversimplification. Compère and collaborators, focusing on the crystal analogy, effectively “prove” that spacetime itself acts and looks like the same thing when shifted from one position to another, but…Studying Bondi et al.’s previous works, it turns out that the limit “far far away”, r\rightarrow\infty, does NOT spoil gravity and the gravitational force. Nor do you get SR: instead, an infinite number of extra symmetries and degrees of freedom appear! Spacetime remains floppy even at very large distances! It seems you can not spoil gravity completely. Now, one could protest…What happens to the equivalence principle here? That is quite a question. Since, even with “no gravity”, there is “gravity” left behind: a residual gravitational force is out there even when you can not see it or feel it. This residue remains and it is spooky.
It has terrible consequences (please, don’t tell astrologists or futurologists about this, …PLEEEEASEE). Distant planets and stars, distant galaxies, are not completely independent of one another after all. The Force is still there. Furthermore, GR is NOT the same as SR even at long distances from the source (the difference would go unnoticed by graduate students, I presume, unless they read about it or were informed on this point). What is this influence? What is left unspoiled when far enough from the source? The magic word that gives name to this blog entry: supertranslations (please, don’t confuse them with the supertranslations from supergroups and supersymmetry).

What are supertranslations? Well, something easy to say, not easy to live with without care. Supertranslations are certain coordinate transformations given by ANGLE-dependent translations relating points at “infinity”, far from a gravitating body. In other words, they are infinite dimensional symmetry transformations relating points at different angles from very far away sources. They form a group called the BMS group. The mathematical code of the BMS group implies that empty spacetime has a large complexity. From this viewpoint, for empty spacetime there are infinitely many ways to be empty. Of course, it seems nonsense, but it has surprising consequences. That vacuum and vacuum spacetime have such vast complexity…Well, it is amazing. Vacuum is a complex thing after all. Every big question in 21st century theoretical physics is related to vacuum! The BMS group has another surprise: beyond supertranslations, you also have superrotations. Superrotations are generalized rotations (even rotational boosts get generalized) that take the form of conformal complex transformations; they are related to the conformal rescaling of the metric, and it turns out (and I am sure this already turns stringers horny) that the superrotation charges form a Virasoro algebra or group. Supertranslation charges are supermomenta, and these are conserved due to the Noether theorem. Superrotation charges are much more subtle; they are superangular momenta. The BMS groups in 3 dimensions or higher have supertranslations and superrotations, but superrotations are bad beasts. It turns out that they could allow you to create cosmic strings at the far extreme points of the Universe.

Time machine again. It is the 1930s of the 20th century. Felix Bloch and Nordsieck calculated that if you add soft photons at zero energy to a collision, the probability of the given outcome is independent of the number of such particles you produce and of extra details. The same situation can be shown to happen with soft gravitons. In general, soft photons or gravitons are added due to supertranslation asymptotic symmetries! Apparently empty spacetimes should gravitate as a consequence of these residual asymptotic symmetries of gravitational forces. Adding soft particles to the vacuum does not change the physics, but it does contribute to the global angular momentum and momentum. This vacuum degeneracy is striking. But it is not uncommon. Asymptotic symmetries have been discovered also in gauge theories. There, they have a much bigger name: they are called “large gauge transformations”. In other words, large gauge transformations are asymptotic symmetries in gauge theories that don’t allow you to spoil the field completely. Vacuum is not “unique”; it is defined modulo asymptotic symmetries/large gauge transformations. Moreover, you can even get asymptotic symmetries from other particles too, not only photons or gravitons. Strominger, in fact, has provided a new interpretation of supertranslations. Supertranslations ADD (soft) particles to (vacuum) spacetime. Are BMS charges real or a mathematical artifact? What are the effects of the BMS charges on the equivalence principle? Well, these questions are hard to answer, but there is another part of the whole story coming.

Circa 1970s. Zel'dovich and Polnarev discovered the memory effect. Gravitational waves cause oscillations of masses and particles, BUT they also cause another stunning effect: gravitational waves produce a permanent shift in the position (displacement) of any object they cross! They also cause a permanent time shift. That means, for instance, that the mirrors of LIGO and other gravitational wave experiments do NOT return to their original positions. This is a very tiny effect, though. It is very similar to the dislocation effect in a crystal after a phonon disturbs the atoms in the lattice; the gravitational wave here acts as the dislocation. Compère has remarked on this effect: the passage of a gravitational wave is like a dislocation in a crystal, and it produces a permanent displacement we could measure in the future with gravitational astronomy. If you want to see how memory effects arise in other forces, see e.g. https://arxiv.org/abs/1805.12224 and references therein. Strominger, indeed, has envisioned a solution to the BHIP (black hole information paradox) using the asymptotic symmetry group. Hawking joined this effort months before his death. However, there are unsolved puzzles with respect to this proposal, and it is not yet a convincing solution. The good thing is that we are approaching a new symmetry principle, beyond the diffeomorphism or Poincaré symmetry group. A new symmetry and relativity principle is necessary in order to achieve a true unification of gravity and quantum mechanics. The dislocation effect of gravitational (or gauge-theory) memories is that two observers shift by a constant distance after the gravitational (or interaction) wave passes. Memory effects for electromagnetic and gauge forces are universal. Perry, Hawking and Strominger proposed a solution of the BHIP with these symmetries. Yet, explaining how spacetime emerges from more fundamental symmetries like these BMS symmetries is not an easy task.
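To get a feel for how tiny the effect is, here is a back-of-the-envelope estimate (my own illustrative numbers, not measured LIGO data): the permanent relative displacement of two free masses a distance L apart is roughly half the memory strain times L, and the memory strain is typically an order of magnitude or so below the oscillatory peak strain.

```python
# Rough order-of-magnitude estimate of gravitational-wave memory displacement.
# Assumed illustrative values (NOT measured data):
h_mem = 1e-22   # memory strain, ~an order of magnitude below a peak strain of ~1e-21
L = 4.0e3       # separation of the two free masses, in metres (a LIGO-like arm length)

# Permanent relative displacement of two free test masses separated by L:
delta_x = 0.5 * h_mem * L
print(f"Permanent displacement: {delta_x:.1e} m")
# prints "Permanent displacement: 2.0e-19 m" -- thousands of times smaller
# than a proton radius (~0.8e-15 m), hence "tiny" indeed.
```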
Even worse, the hair, or the extra charges introduced by supertranslations and superrotations, might not be enough to save unitarity!

In summary:

  • Asymptotic symmetries or large gauge symmetries are symmetry transformations that do NOT "die off" at far infinity from the source; they survive there and act nontrivially. Do you remember that High School boundary condition telling you that the potential at infinity is ZERO? Well, it is not just "zero". Something lives there at infinity… Something is left. You can add any number of zero-energy soft particles far away. But wait, what is a zero-energy particle?
  • The Noether theorem applied to supertranslations and superrotations gives you conserved hair or charges (an infinite number of them, indeed) called supermomenta and superangular momenta. Supermomenta can be thought of as angle-dependent translations at infinity. Superangular momenta are more abstract: they are "conformal" transformations associated with Virasoro algebras. Superangular momentum charges can be imagined as the charges generating diffeomorphisms of a circle. That is, superrotations are just reparametrizations of the circle in abstract complex spaces.
  • Asymptotic symmetries are a step toward the new symmetry principle behind quantum gravity and the TOE. They are not yet enough to solve the BHIP, but they hint towards a new symmetry as well.
  • It is not true that, by taking the limit of infinitely far distances, you kill off gravity or any other gauge force and end up with mere special relativity, or even mere Galilean relativity. Asymptotic symmetry changes this naive assumption. What is the destiny of the equivalence principle in this setting? Well, perhaps the vacuum itself cannot be defined except MODULO large gauge transformations/generalized asymptotic symmetries. Quantum rest (zero force, zero jerk, zero pop, zero crackle, zero absement) is meaningless/pointless/futile: you can always add zero-energy particles to your vacuum. Recall the Unruh effect here. Is asymptotic symmetry telling us that different accelerated observers could agree on their vacuum modulo soft particles?
  • Memory effects: gravitational waves permanently disturb any object they pass through. This constant shift is called gravitational wave memory. It could be measured in the future. Similar memory effects exist in other gauge theories.
  • The interplay of a certain Holy Trinity:

    \[\boxed{\mbox{Symmetry}\leftrightarrow \mbox{Soft particles}\leftrightarrow\mbox{Memory effects}}\]
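The Virasoro algebra mentioned in the superrotation bullet above is the standard one of two-dimensional conformal symmetry: the generators \(L_m\), together with a central charge \(c\), obey

    \[[L_m,L_n]=(m-n)\,L_{m+n}+\dfrac{c}{12}\,m\,(m^2-1)\,\delta_{m+n,0}.\]

For \(c=0\) this reduces to the Witt algebra, the algebra of infinitesimal reparametrizations (diffeomorphisms) of the circle, which is exactly the "reparametrizations of the circle" picture of superrotations.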

Have you been supertranslated and superrotated? Probably yes: any gravitational wave crossing through your body permanently shifts your atoms and particles! Fortunately, it is a tiny effect. Hopefully, we will measure it with future gravitational wave telescopes! Disturbingly, any gauge force does the same, and it cannot be gauged away except modulo soft particles at your position and time. Even worse, OR NOT, any faraway source in the Universe is linked to you via supertranslations and superrotations. Don't tell any pseudoscientist about it: instead of blowing their minds, they will try to publish and make money with that information. Mmm… Should I make money with supertranslations and superrotations? Just joking. See you in another quantum blog post!

P.S.: The moral is that we were lied to! The potential at infinity does NOT kill the force (potential) off to zero. Zero-point measurements are likely impossible in quantum fluctuating theories, since there is no such thing as quantum rest. The quantum aether is "dead" as a static thing. Is the cosmological constant a residual effect of the quantum unified field theory?