LOG#239. Higgspecial.

The Higgs particle, found in 2012, is special. The next generations of physicists and scientists will likely build larger machines or colliders to study it precisely. The question is, of course, where is the new physics, i.e., where is the next energy scale of physics? Is it the Planck scale? Is it lower than 10^{28}eV?

What is a particle?

Particles are field excitations. Fields satisfy wave equations. Thus particles, as representations of fields, also obey field or wave equations. Fields and particles also have symmetries: they are invariant under certain group transformations. There are several types of symmetry transformations:

  1. Spacetime symmetries or spacetime invariance. They include translations, rotations, boosts (pure Lorentz transformations) and any combination of the three. The homogeneous Lorentz group does not include translations. The inhomogeneous Lorentz group includes translations and is called the Poincaré group. Generally speaking, spacetime symmetries are local spacetime transformations only.
  2. Internal (gauge) symmetries. These are transformations of the fields, up to a phase factor, at the same spacetime point. They can be global or local.
  3. Supersymmetry. Transformations relating particles of different statistics, i.e., relating bosons and fermions. It can be extended to higher spin under the names of hypersymmetry and hypersupersymmetry. It can also be extended to N-graded supermanifolds.

We say a transformation is global when the group parameter does not depend on the base space (generally spacetime). We say a transformation is local when it depends on functions defined on the base space.

Quantum mechanics is just a theory relating “numbers” to each other. Particles or fields are defined as functions of the spacetime momentum (continuous, in general) and of a certain discrete set of numbers (quantum numbers, all of them),


and thus

    \[U(\Lambda)\ket{p,\sigma}=D_{\sigma\sigma'}\ket{\Lambda p,\sigma'}\]

represents quantum particles/waves as certain unitary representations of the Poincaré group (spacetime)! Superfields generalize this: different particles or fields are certain unitary representations of the super-Poincaré group (superspacetime)! Equivalently, particles are invariant states under group or supergroup transformations. Particle physics is the study of the fundamental laws of Nature governed by the (yet hidden and mysterious) fusion of quantum mechanical rules and spacetime rules.

From the 17th century to the 20th century we had a march of reductionism and symmetries. Whatever the ultimate theory is, relativity plus QM (Quantum Mechanics) are obtained as approximations at low or relatively low (about 100 GeV) energies. Reductionism works: massless particles interact through Greek-Y (upsilon) shaped vertices.


Massless particles can be easily described by twistors, certain bispinors (pairs of spinors):

    \[p_{\alpha\dot{\alpha}}=\begin{pmatrix} p_0+p_3 & p_1-ip_2\\ p_1+ip_2 & p_0-p_3\end{pmatrix}=\lambda_{\alpha}\overline{\lambda}_{\dot{\alpha}}\]
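As a quick numerical sanity check (my own sketch, not from the original post): a massless momentum has \det p_{\alpha\dot{\alpha}}=p_0^2-p_1^2-p_2^2-p_3^2=0, so the 2×2 matrix above has rank one and factorizes into the spinor pair \lambda, \overline{\lambda}.

```python
import cmath

def bispinor(p0, p1, p2, p3):
    """The 2x2 matrix p_{alpha alphadot} of a 4-momentum."""
    return [[p0 + p3, p1 - 1j * p2],
            [p1 + 1j * p2, p0 - p3]]

# A massless momentum: the energy equals the modulus of the 3-momentum
p1, p2, p3 = 1.0, 2.0, 2.0
p0 = (p1 ** 2 + p2 ** 2 + p3 ** 2) ** 0.5   # = 3.0
p = bispinor(p0, p1, p2, p3)

# det p = p0^2 - p1^2 - p2^2 - p3^2 is the invariant mass squared
det = p[0][0] * p[1][1] - p[0][1] * p[1][0]

# Rank-one factorization p = lambda x lambda-bar (valid here since p0 + p3 > 0)
lam = [cmath.sqrt(p[0][0]), p[1][0] / cmath.sqrt(p[0][0])]
lambar = [z.conjugate() for z in lam]
rebuilt = [[lam[a] * lambar[b] for b in range(2)] for a in range(2)]
err = max(abs(rebuilt[a][b] - p[a][b]) for a in range(2) for b in range(2))
```

The reality of the momentum is what forces \overline{\lambda} to be the complex conjugate of \lambda; for complex momenta the two spinors are independent.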

Indeed, interactions are believed to be effectively described by parallel twistor-like variables \lambda_A\propto \lambda_B\propto \lambda_C and \overline{\lambda}_A\propto\overline{\lambda}_B\propto \overline{\lambda}_C. The Poincaré group completely fixes the way in which particles interact with each other. For instance, the 4-particle scattering amplitude is constrained to the form

    \[(\langle 1 2 \rangle \left[3 4 \right])^{2S}F(s,t,u)\]

where S is the spin of the particle. Be aware of not confusing the spin S with the Mandelstam variable s. Locality implies the factorization of the 4-particle amplitude into two Y pieces, such as

    \[F(s,t,u)=\begin{cases}\dfrac{g^2}{s t^S}\\ \dfrac{g^2}{t u^S}\\ \dfrac{ g^2}{u s^S}\end{cases}\]

Two special cases are S=0 (the Higgs!) and S=2 (the graviton!):



where the latter represents 2→2 graviton scattering. For spin S=1 you have

    \[Y\propto gf^{abc}\dfrac{\langle 1 2 \rangle^3}{\langle{13}\rangle\langle 23\rangle}\]

Interactions between massive and massless spin-one particles must contain spin-zero particles in order to unitarize the scattering amplitudes! Those scalar bosons are Higgs bosons. Of course, at very high energies, the Higgs and the longitudinal components of the massive gauge bosons (spin one) are all unified into a single electroweak interaction. A belief in these principles has paid off: particles can only have spin 0, 1/2, 1, 3/2, 2,… The 21st century revelations must include some additional pieces of information about this stuff:

  • The doom or end of spacetime. Is the end of reductionism in sight?
  • Why is the Universe big?
  • New ideas are required beyond spacetime and internal symmetries. The missing link is usually called supersymmetry (SUSY), certain odd symmetry transformations relating bosons and fermions. A new dogma.
  • UV/IR entanglement/link/connection. At energies bigger than the Planck energy, it seems physics classicalizes: we have black holes with large sizes, and thus (rest) energies larger than the Planck energy. High energy is short-distance UV physics; low energy is large-distance IR physics.
  • Reductionism plus Wilsonian effective field theory approaches plus the paradigmatic model is false. Fundamental theories or laws of Nature are nothing like condensed matter physics (even when condensed matter systems are useful analogues!). Far deeper and more radical ideas are necessary. Only at the Planck scale?

Photons must stay massless for consistent Quantum Electrodynamics, so they are Higgs transparent. 2\neq 3 is Nima Arkani-Hamed's statement on this: massless helicities are not the same as massive helicities. This fact is essential to gauge fields and chiral fermions, so they can be easily engineered in condensed matter physics. However, Higgs fields are strangers to condensed matter systems. The Higgs is special because it does NOT naturally arise in superconductor physics and other condensed matter fields. Why is the Higgs mass low compared to the Planck mass? That is the riddle. The enigma. Higgs particles naturally receive quantum corrections to their mass from boson and fermion particles. The cosmological constant problem is beyond a Higgs-like explanation, because the Higgs field energy is too large to handle it. Of course, there are some ideas about how to fix it, but they are cumbersome and very complicated. We need to go beyond standard symmetries. And even so, puzzles appear. Flat spacetime, de Sitter (dS) spacetime or anti-de Sitter (AdS) spacetime? They have some amount of extra symmetries: SO(5,1)\rightarrow \mbox{Poincaré}\rightarrow SO(4,2), for the cases \Lambda>0 (dS), \Lambda=0 (flat spacetime), \Lambda<0 (AdS). Recently, we found (not easily) dS vacua in string and superstring theories, but AdS/CFT correspondences are still better understood. We are missing something huge about the QM of the relativistic vacuum in order to understand the macroscopic Universe we observe and live in.

Why is the Higgs discovery so important? Our relativistic vacuum is qualitatively different from anything we have seen (dark matter, dark energy,…) in ordinary physics. Not just at the Planck scale! Already at the GeV and TeV scales we face problems! The Higgs plus nothing else at low energies means that something is wrong, or at least not completely correct. The Higgs is the most important character in this dramatic story of dark stuff. We can put it under the most incisive and precise experimental testing! So we need either better colliders or better dark matter/dark energy observations. The Higgs is new physics from this viewpoint:

  1. We have never seen scalar (fundamental and structureless?) fields before.
  2. It is a harbinger of deep and hidden new principles/sectors at work in the quantum realm.
  3. We must study it closely.

It could turn out that Higgs particles are composite. How point-like are Higgs particles? They could really be composites of some superstrong pion-like stuff, but they could also be truly fundamental. A Higgs factory working at the 125 GeV (pole mass) of the Higgs should serve to see whether the Higgs is point-like (fundamental). Furthermore, we have never seen self-interacting scalar fields before. A 100 TeV collider or larger could measure the Higgs self-coupling up to 5%. The Higgs is similar to gravity there: the Higgs self-interacts much like gravitons!

Yang-Mills (YM) fields plus gravity change the helicity of particles AND color. 100 TeV colliders blast interactions and push High Energy Physics forward. New particles with masses up to 10 times the masses accessible to the LHC would be available. They would probe vacuum fluctuations with 100 times the LHC power. The challenge is hard for experimentalists. Meanwhile, the theorists must go far beyond known theories, into a theory of the mysterious cosmological constant and the Higgs scalar field. The macro-universe versus the micro-universe is at hand. Do on-shell Lorentzian couplings rival off-shell Euclidean couplings of the Higgs? Standard local QFT in Euclidean spacetimes is related to Lorentzian fields. UV/IR mixing changes this view! Something must be changed or challenged!

Toy example

Suppose F(t)=1/t. Then, by the residue theorem,

    \[\dfrac{1}{2\pi i}\oint dt F(t)=1\]
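This residue computation is easy to check numerically (my own sketch; the parametrization t=e^{i\theta} over the unit circle is an assumption of the example):

```python
import cmath
import math

def unit_circle_integral(F, n=4096):
    """Approximate (1 / 2 pi i) * the closed integral of F over |t| = 1."""
    total = 0j
    for k in range(n):
        t = cmath.exp(2j * math.pi * k / n)
        total += F(t) * 1j * t * (2 * math.pi / n)   # dt = i t dtheta
    return total / (2j * math.pi)

residue_pole = unit_circle_integral(lambda t: 1 / t)       # simple pole at t = 0
residue_holo = unit_circle_integral(lambda t: t ** 2 + 3)  # holomorphic inside
```

The first integral picks up the residue 1 at the pole; the second vanishes, as the Cauchy theorem demands.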

Does the Effective Field Theory viewpoint imply that the Higgs must take that unnatural value? Wrong! Take instead an F(t) whose residues inside the contour cancel, so that
    \[\oint dt F(t)=0!\]

This mechanism for removing bulk signs works in AdS/CFT correspondences and similar theories. For \Lambda=0 we need something analogous to remove singularities, and to test it! For instance, UV-IR tuning could provide sensitivity to loop processes arising in the EFT of the Higgs potential

    \[V(1-loop)=\lambda^4h^4\log( \lambda^2+k^2)+(M\pm h)^4\log (M\pm k)^2=\sum M^4\log M^2(k)\]

However, why should a tree-level term cancel the 1-loop correction? It contains UV states and \lambda^2 M^2 h^2 terms. Tree amplitudes are rational; loop amplitudes are transcendental! But funny things have long been known to happen in QFT computations. For instance,

    \[\Gamma(\mbox{positronium})=\mbox{something}\times (\pi^2-9)\]

Well, this does not happen by accident. There is a hidden mechanism here, known from Feynman’s books. Rational approximations to transcendental numbers are also known from old mathematics! A high school student knows

    \[\ln 2=\int_0^1 \dfrac{dx}{1+x}\]

This is transcendental because of the single pole at x=-1. If you take instead

    \[I=\int_0^1 \dfrac{P(x)\,dx}{1+x}=P(-1)\log 2+\mbox{Rational part}\]

you get an apparent tuning of rational to transcendental numbers

    \[\int_0^1\dfrac{dx}{1+x}\left(\dfrac{x(1-x)}{2}\right)^N=\pm \log 2+\mbox{Rational part}\]

and thus, e.g., if N=5, you get a tiny difference from \log 2, of order 10^{-5} (the rational part is 2329/3360, matching \log 2 to that precision). The same idea works if you take

    \[4\int_0^1\dfrac{dx}{1+x^2}\left(\dfrac{x(1-x)}{4}\right)^{4N}=\pi +\mbox{Rational number}\]

You get \pi-22/7\sim 10^{-3} for N=1, and \pi-47171/15015\sim 10^{-6} for N=2. Thus, we could conjecture a fantasy: there is a dual formulation of standard physics that represents physical amplitudes and observables in terms of integrals over abstract geometries (motives? schemes? a generalized amplituhedron as seen in super YM?). In this formulation, the discrepancy between the cosmological constant scale and the Higgs mass is solved and is obviously small. But it cannot be obviously local physics. Another formulation separates the different number-theoretical parts and looks like local physics, though! However, it will be fine-tuned, like the integrals above! In the end, something could look like

    \[V(h)=\int dk^2 F(k^2)=\mbox{Logs+rational}=\mbox{exponentially small}\]

Fine-tuning could, thus, be only an apparent artifact of local field theory!
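The N=5 claim above can be checked exactly with rational arithmetic. The sketch below (my own, using Python's fractions) divides the numerator polynomial by (1+x) and integrates the quotient, isolating the rational part of the integral:

```python
from fractions import Fraction
import math

def poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (index = power)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def rational_part(N):
    """Exact rational part of int_0^1 (x(1-x)/2)^N / (1+x) dx.

    Writing the numerator P(x) = Q(x)(1+x) + P(-1), the integral equals
    int_0^1 Q(x) dx + P(-1) log 2, so the rational piece approximates
    -P(-1) log 2, here log 2 itself since P(-1) = (-1)^N.
    """
    poly = [Fraction(1)]
    for _ in range(N):
        poly = poly_mul(poly, [Fraction(0), Fraction(1, 2), Fraction(-1, 2)])
    # Synthetic (Horner) division of P(x) by (x + 1)
    acc, table = Fraction(0), []
    for c in reversed(poly):           # highest power first
        acc = c + acc * (-1)
        table.append(acc)
    table.pop()                        # last entry is the remainder P(-1)
    deg = len(table) - 1               # quotient degree
    return sum(q / (deg - i + 1) for i, q in enumerate(table))

r5 = rational_part(5)                  # exact Fraction approximating log 2
```

Running it reproduces 2329/3360 exactly, about 7.5\cdot 10^{-6} away from \log 2, as claimed.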

A final concrete example:

    \[F(h)=\int_k^1 \dfrac{dx (x-h)^4}{1+x}\left(\dfrac{x(1-x)}{2}\right)^N\]

Take V(h)=F(h)+F(-h). Then,

    \[V(h)=\sum_{\pm}(1\pm h)^4\log (1\pm h)+\mbox{Rational parts}\]

And it is guaranteed to look fine-tuned! This should have critical tests in a Higgs factory, a very large LHC, and/or 100 TeV colliders or above. In the example above, if N=5,


with no sixth-power or eighth-power terms. Precision circular electron-positron colliders could handle this physics. Signals from tuning mechanisms could be searched for. It is not just about m_h^2 terms: higher-dimensional operators and corrections to the Higgs potential (the vacuum structure itself!) could be studied. We could also search for new fields or tiny effects we cannot probe at the LHC.

Summary: the scientific issues of today are deeper than those of the 1930s or even the 1900s. Questions raised by the accelerated universe and the Higgs discovery go to the heart of the nature of spacetime and the vacuum structure of our Universe!

What about symmetries? In the lagrangian (action) approach, a symmetry variation (quasi-invariance) of the lagrangian reads

    \[\delta_s L=\partial_\mu \left[\left(\dfrac{\partial L}{\partial(\partial_\mu\phi)}\right)\delta_s \phi\right]+\left[\dfrac{\partial L}{\partial\phi}-\partial_\mu\left(\dfrac{\partial L}{\partial(\partial_\mu\phi)}\right)\right]\delta_s\phi\]

Then, by the first Noether theorem, imposing that the action is extremal (generally minimal), i.e., that the Euler-Lagrange equations (equations of motion) hold,

    \[E(L)=0, \mbox{Plus quasiinvariance}\;\; \delta_s L=\partial_\mu K^{\mu}\]

you get a conserved current (and a charge, after integration over a suitable measure):

    \[\partial_\mu J^\mu=0=\partial_\mu\left[\dfrac{\partial L}{\partial(\partial_\mu \phi)}\delta_s \phi -K^\mu\right]=\varepsilon \partial_\mu J^\mu\]

such as

    \[J^\mu=\dfrac{\partial L}{\partial( \partial_\mu \phi)}\Delta_s\phi-\dfrac{K^\mu}{\varepsilon}\]

where \Delta_s\phi= \delta_s\phi/\varepsilon.

This theorem can be generalized to higher-order lagrangians, to any number of dimensions, and even to fractional and differintegral operators. Furthermore, a second Noether theorem handles ambiguities in this theorem, stating that local gauge transformations imply certain relations or identities between the field equations (sometimes referred to as Bianchi identities, though they go beyond those classical identities). You can go further with differential forms, exterior calculus or even with Clifford geometric calculus. A p-form

    \[A_p=\dfrac{1}{p!}A_{\mu_1\cdots\mu_p} dx^{\mu_1}\wedge\cdots \wedge dx^{\mu_p}\equiv A_{\mu_1\cdots\mu_p} dx^{\mu_1}\wedge\cdots \wedge dx^{\mu_p}\]

defines p-dimensional objects that can be naturally integrated. For a p-tube in D dimensions,

    \[\tau_{\mu_{p+1}\cdots \mu_D}=\dfrac{1}{p!}\int_C\varepsilon_{\mu_1\cdots\mu_p\mu_{p+1}\cdots\mu_D}\delta(x-y) dy^{\mu_1}\wedge\cdots dy^{\mu_p}\]

On p-forms, the Hodge star operator turns a p-form A_p in D dimensions into a (D-p)-form:

    \[\star A=\dfrac{\sqrt{\vert g\vert}}{p!(D-p)!}A_{\mu_1\cdots\mu_p}\varepsilon^{\mu_1\cdots\mu_p}{}_{\nu_{p+1}\cdots\nu_D} dx^{\nu_{p+1}}\wedge\cdots\wedge dx^{\nu_D}\]

As D=\dim(X), we have \star^2=\star\star=(-1)^{p(D-p)+q}, where q=1 if the metric is Lorentzian, q=0 for Euclidean metrics, and q=T, the number of time-like dimensions, if the metric is ultrahyperbolic. Moreover,

    \[vol=\star 1=\dfrac{\sqrt{\vert g\vert}}{D!}\varepsilon_{\mu_1\cdots \mu_{D}}dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_D}=\sqrt{\vert g\vert}\, dx^{1}\wedge\cdots \wedge dx^{D}\]
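The sign rule \star^2=(-1)^{p(D-p)+q} quoted above is easy to tabulate; a small sketch of mine:

```python
def star_squared_sign(p, D, q):
    """Sign of applying the Hodge star twice to a p-form in D dimensions.

    q = 0 for Euclidean metrics, q = 1 for Lorentzian ones, and q = T
    (the number of time-like dimensions) for ultrahyperbolic metrics.
    """
    return (-1) ** (p * (D - p) + q)

# In D = 4: star^2 = -1 on Lorentzian 2-forms but +1 on Euclidean 2-forms,
# which is why (anti-)self-dual 2-forms need complexification in spacetime
sign_lorentzian = star_squared_sign(2, 4, 1)   # -1
sign_euclidean = star_squared_sign(2, 4, 0)    # +1
```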

For \star:\Omega^p\rightarrow \Omega^{D-p} maps, you can also write

    \[\langle A,B\rangle=\int A\wedge \star B=\int B\wedge \star A \]

    \[\int A\wedge B=\int \star A\wedge \star B\]

where the latter is generally valid up to a sign. The Hodge laplacian reads

    \[\Delta=dd^{+}+d^{+}d=(d+d^{+})^2\]

and one also gets

    \[\langle A, dB\rangle=\langle d^+A,B\rangle\]

If \partial X is not zero (the boundary is not null), then this essentially implies Dirichlet or Neumann boundary conditions for d, d^+. Applying the adjoint operator d^+ to p-forms, you get

    \[d^{+}=(-1)^{Dp+D+1}\star d\star\]

in general, but you pick up an extra minus sign depending on the signature.

To end this eclectic post, first a new twist on the Weak Gravity Conjecture (WGC). Why is the electron mass so small in certain units, i.e., why is m_e<q_e? Take the Coulomb and Newton force laws for two electrons,

    \[F_C=\dfrac{q_e^2}{4\pi\varepsilon_0 r^2}\qquad F_N=\dfrac{G_N m_e^2}{r^2}\]

so that

    \[\dfrac{F_N}{F_C}\sim 10^{-42}\]

Planck mass is

    \[M_P=\sqrt{\dfrac{\hbar c}{G_N}}\]

and then

    \[Q_P=\sqrt{4\pi\varepsilon_0\hbar c}=\left(\dfrac{\hbar c}{K_C}\right)^{1/2}\]

Planckian entities satisfy F_N/F_C=1 instead! Then, the enigma is why

    \[\dfrac{m_e}{M_P}<10^{-22}\ll\dfrac{q_e}{q_P}\sim 0.1=10^{-1}\]

In other words, q_e/m_e\approx 10^{21} in relativistic natural units with c=\hbar=4\pi\varepsilon_0=1. The WGC states that the lightest charged particle, with mass m and charge q, in ANY U(1) (abelian gauge) theory admits a UV embedding into a consistent quantum gravity theory if and only if

    \[\dfrac{qg}{\sqrt{\hbar}}\geq \dfrac{m}{M_p}\]

where g is the gauge coupling. QED satisfies the WGC, since ge/\sqrt{\hbar}\sim 10^{-3}\gg 10^{-22}. The WGC ensures that extremal black holes are unstable and decay rapidly into non-extremal black holes (if they even form!) via processes that are QG consistent, avoiding extremality. Furthermore, the WGC could easily imply the 3rd law of thermodynamics. For Reissner-Nordström black holes

    \[T=\dfrac{\hbar \sqrt{M^2-Q^2}}{2\pi\left(M+\sqrt{M^2-Q^2}\right)^2}\]

and a grey-body correction to the black hole spectrum arises from this too:

    \[\langle N_{j\omega l p}\rangle=\dfrac{\Gamma(j\omega l p)}{e^{(\omega-e\phi)/T}\pm 1}\]
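The electron numbers quoted above follow from hardcoded CODATA-style constants; a numerical aside of mine:

```python
import math

# SI values, hardcoded here for a rough check
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
G = 6.67430e-11            # m^3 kg^-1 s^-2
eps0 = 8.8541878128e-12    # F / m
q_e = 1.602176634e-19      # C
m_e = 9.1093837015e-31     # kg

M_P = math.sqrt(hbar * c / G)                   # Planck mass ~ 2.18e-8 kg
Q_P = math.sqrt(4 * math.pi * eps0 * hbar * c)  # Planck charge ~ 1.88e-18 C

mass_ratio = m_e / M_P     # ~ 4e-23, far below 10^-22
charge_ratio = q_e / Q_P   # = sqrt(alpha) ~ 0.085 ~ 0.1
```

The charge ratio is just \sqrt{\alpha}, and it exceeds the mass ratio by about twenty orders of magnitude: the electron satisfies the WGC very comfortably.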

Generalized Uncertainty Principles (GUP) plus Chandrasekhar limits enjoy similarities with the WGC:

    \[M_C\sim \dfrac{1}{m_B^2}\left(\dfrac{\hbar c}{G}\right)^{3/2}\simeq 1.4M_\odot\]

The S-matrix satisfies

    \[\braket{\Psi(+\infty) |\Psi(+\infty)}=1=\bra{\Psi(- \infty)} S^+S\ket{\Psi(-\infty)}=\braket{\Psi(-\infty) |\Psi(-\infty)}\]

and by time reversal, the principle of detailed balance holds, so

    \[\braket{\Psi(-\infty) |\Psi(+\infty)}=1=\bra{\Psi(+ \infty)} S^+S\ket{\Psi(-\infty)}=\braket{\Psi(+\infty) |\Psi(-\infty)}\]

Quantum determinism implies, via unitarity, \Psi'=U\Psi. However, Nature could surprise us, and that would affect Chandrasekhar masses or TOV limits. Stellar evolution implies luminosities L=4\pi R^2 \sigma T_e^4, where T_e is the effective blackbody temperature (Planck law) and \sigma is the Stefan-Boltzmann constant

    \[\sigma=5.67\cdot 10^{-5}\dfrac{erg}{cm^2 K^4 s}\]
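As a consistency check of L=4\pi R^2\sigma T_e^4 (my sketch; the solar radius and effective temperature are assumed reference values):

```python
import math

sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_sun = 6.957e8          # solar radius, m
T_eff = 5772.0           # solar effective temperature, K

L_sun = 4 * math.pi * R_sun ** 2 * sigma * T_eff ** 4   # ~ 3.8e26 W
```

This lands right on the measured solar luminosity, about 3.8\cdot 10^{26} W.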

The maximal energy for a set of N baryons under gravity combines the gravitational energy E_G=-GMm_B/R=-GNm_B^2/R with the Fermi energy:

    \[E=\hbar c\dfrac{N^{1/3}}{R}-G\dfrac{Nm_B^2}{R}\]

as the baryon number for a star is

    \[N_B=\left(\dfrac{\hbar c}{Gm_B^2}\right)^{3/2}\simeq 2\cdot 10^{57}\]
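Plugging in constants (a numerical aside of mine; note that the combination \hbar c/Gm_B^2 is dimensionless, with m_B the baryon mass) confirms both the baryon number and that the corresponding mass is of the order of a solar mass:

```python
hbar = 1.054571817e-34     # J s
c = 2.99792458e8           # m / s
G = 6.67430e-11            # m^3 kg^-1 s^-2
m_B = 1.67262192369e-27    # proton mass, kg

N_B = (hbar * c / (G * m_B ** 2)) ** 1.5   # dimensionless, ~ 2e57
M_star = N_B * m_B                         # ~ 3.7e30 kg, about 2 solar masses
```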

Using the Wien law

    \[\lambda_{max} T_e\simeq 2.9\cdot 10^6\, nm\cdot K\]

the stars locally have a mean inter-baryon distance of about

    \[\left(\dfrac{V}{N}\right)^{1/3}=n^{-1/3}\sim \left(\dfrac{4}{3} \pi R_\odot^3\dfrac{m_B}{M_{\odot}}\right)^{1/3}\sim 10^{-8}\,cm\]

Stars are sustained by gas and radiation pressure against gravitational collapse. The stellar pressure reads

    \[P_\star=P(gas)+P(radiation)=\dfrac{K}{\mu}\rho T+\dfrac{1}{3}a T^4\]

Thus, the maximal mass for a white dwarf star made of baryons is about 1.5M_\odot. The ideal gas law implies the HR diagram! Luminosity scales as the cube of mass. The Eddington limit, the maximal luminosity for any star, reads, with opacity \kappa,

    \[L_E=\dfrac{4\pi c\, G m(r)}{\kappa}\]

and a Buchdahl limit arises from this and the TOV limit as follows

    \[TOV\rightarrow \dfrac{GM}{c^2R}<\dfrac{4}{9}\]

and then a black hole is the inevitable consequence for masses above, approximately, M>3M_\odot (the TOV limit)!

Epilogue: heterodynes or superheterodynes? Janskys? dB scales? The photoelectric effect is compatible with multiphoton processes and special relativity too. SR has formulae for the Compton effect, the inverse Compton effect, pair creation, pair annihilation, and strong-field effects!



LOG#238. Cosmostuff.

Cosmology is facing troubles again. Some estimations of the Hubble parameter H_0 differ by up to four standard deviations from the accepted value. Even though a few km/s/Mpc is not a huge difference, it turns out that they can reveal anomalies if measured with enough precision.

I am going to review Cosmology today, as far as we know it. It is a quite dynamical field, so changes are expected in the next years with new telescopes like the JWST (James Webb Space Telescope) and other tools or ideas that make this subject so fascinating. After all, isn’t the Universe as a whole the biggest place where we live? I am neglecting the Multiverse idea as untestable at this moment.

Cosmology can use the so-called standard candles and standard rulers to measure how fast the Universe is expanding. Two measures of distance are

(1)   \begin{equation*} D_L\rightarrow F=\dfrac{L}{4\pi D_L^2}\end{equation*}

(2)   \begin{equation*} D_A=\dfrac{R}{\theta}\end{equation*}

and they are called the luminosity distance and the angular distance. They use, respectively, the flux from some strong sources (e.g., SNIa) and the ability to measure the size of an object at long distances (of course, far, far away objects look like point sources, so angular measurement is only possible with good resolution and/or objects close enough with respect to our instruments).

Considering luminosities, you can measure the bolometric magnitudes:

(3)   \begin{equation*} m-M=5\log \left(\dfrac{d}{10pc}\right)\end{equation*}


(4)   \begin{equation*} m_A-m_B=-2.5\log \left(\dfrac{f_A}{f_B}\right)\end{equation*}
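Both magnitude relations are one-liners; a small sketch of mine:

```python
import math

def distance_modulus(d_pc):
    """m - M = 5 log10(d / 10 pc)."""
    return 5 * math.log10(d_pc / 10.0)

def flux_ratio(delta_m):
    """f_A / f_B given m_A - m_B, from m_A - m_B = -2.5 log10(f_A / f_B)."""
    return 10 ** (-delta_m / 2.5)

mu_near = distance_modulus(10)      # 0: at 10 pc, apparent = absolute magnitude
mu_far = distance_modulus(1e6)      # 25 for a source at 1 Mpc
dimming = flux_ratio(5)             # +5 magnitudes = 100 times fainter
```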

Since the beginning of the second third of the 20th century (Zwicky), we have known that galaxies are not bright enough to explain galactic motions. The Tully-Fisher relation

(5)   \begin{equation*}L\propto v^4\end{equation*}

and the Faber-Jackson relation

(6)   \begin{equation*} L\propto \sigma (v)^4\end{equation*}

are the celebrated luminosity relations for spiral and elliptical galaxies, respectively. They hint that there is dark matter out there (or, equivalently, a component of gravity we do not see with electromagnetic fluxes).

There are two faint objects of which we are not yet sure how many there are in galaxies: white dwarfs and neutron stars (I give up the option of black holes and other exotic compact objects, ECOs, at this moment). There are quite strong and robust limitations on white dwarf and neutron star masses due to nuclear physics. For the former we have the Chandrasekhar limit, at about 1.44M_\odot; for the latter we have the Tolman-Oppenheimer-Volkoff limit, at about 2-3M_\odot. The uncertainties in our knowledge of the nuclear equation of state for neutron stars are responsible for the relatively big mass spread (1 solar mass). Gravitational waves are, very likely, going to be the tool to measure the nuclear equation of state.

The Dark Energy Survey (DES) is a 15000 times 70, i.e., 1050000 dollar project of astronomical Cosmology. PTOLEMY is another future experiment. If there is “no” privileged distance, then we could measure some interesting distances. For instance, a relatively new method uses the baryon acoustic oscillations (BAO). What the hell are the BAO? BAO are fluctuations in the density of the visible baryonic (normal) matter of the universe, caused by acoustic density waves in the primordial plasma of the early universe; they leave an imprint in the distances at which galaxies are correlated. The critical acoustic distance is about 150 Mpc, or about 500 Mlyr. The correlation function is then a device that gives you the probability that two galaxies are separated by a certain distance d_S. Cosmology also measures the rate of expansion of the Universe via a modified Hubble law. The original Hubble law is


    \[v=H_0 d\]

but the real, more precise Hubble law depends on the redshift z:

(7)   \begin{equation*}v=H(z)d\end{equation*}

In the 21st century, we have more tools to measure what the Universe looks like beyond the visible electromagnetic spectrum. We can handle X-rays, gamma rays, radio waves, neutrinos (extragalactic ones known since 2006, plus SN 1987A) and gravitational waves (since 2015). Software analysis is diverse: tools like SExtractor or DESI are known in astronomy. General relativity is tested and established as the cosmological theory in the disguise of LCDM. The perfect cosmological principle has been abandoned. We solve the Olbers paradox by giving an age of 13.8 Gyr to our Universe (e.g., see the LSST data and observations, or the final PLANCK probe data). The non-perfect cosmological principle, saying that the Universe is isotropic and homogeneous at distances above 100 Mpc, has been tested. The GR equations hold:

    \[G_{\mu\nu}=\dfrac{8\pi G}{c^4}T_{\mu\nu}\]


    \[G_{\mu\nu}=R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu} R+\Lambda g_{\mu\nu}\]

Pressure is a source of gravity, through the universal equation of state

    \[P=\omega \rho\]

There is a critical density for the gravitational collapse of the Universe. It is given by

(8)   \begin{equation*}\rho_c=\dfrac{3H^2}{8\pi G}\end{equation*}


(9)   \begin{equation*}\rho_c(E)=\rho_c(m)c^2=\dfrac{3H^2c^2}{8\pi G}\end{equation*}

Plugging in the measured values, it is about 8\cdot 10^{-27}kg/m^3, or about 5 hydrogen atoms (protons) per cubic meter. Also, you can rewrite that value as

    \[\rho_c(measured)\sim \dfrac{1.5\cdot 10^{11}M_\odot}{Mpc^3}\]
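A quick evaluation of \rho_c=3H^2/8\pi G (my sketch; H_0 = 67 km/s/Mpc is an assumed fiducial value):

```python
import math

G = 6.67430e-11                 # m^3 kg^-1 s^-2
Mpc = 3.0857e22                 # m
M_sun = 1.989e30                # kg
m_p = 1.67262192369e-27         # proton mass, kg
H0 = 67.0 * 1000.0 / Mpc        # 67 km/s/Mpc converted to s^-1 (assumed value)

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)     # ~ 8.4e-27 kg / m^3
protons_per_m3 = rho_c / m_p                # ~ 5 protons per cubic meter
rho_c_astro = rho_c * Mpc ** 3 / M_sun      # ~ 1.2e11 solar masses per Mpc^3
```

The astrophysical form, roughly 10^{11} solar masses per cubic megaparsec, is pleasingly close to one galaxy per Mpc^3.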

Then, knowing \rho_c, you can define the omega densities as the density of species X over the critical density (dimensionless):

    \[\Omega_X=\dfrac{\rho_X}{\rho_c}=\dfrac{8\pi G\rho_X}{3H^2}\]

The measured value of the cosmological constant energy (or mass) density, is close to the Zeldovich estimate of vacuum energy:

    \[\rho_\Lambda=\dfrac{Gm^6c^2}{\hbar^4}=\dfrac{\Lambda c^4}{8\pi G}\]

This relation has been derived by C. Beck from purely information theory arguments

    \[\rho_\Lambda (Beck)=\left(\dfrac{c}{\hbar}\right)^4\left(\dfrac{G}{8\pi}\right)\left(\dfrac{m}{\alpha}\right)^6\]

where m=m_e is the electron mass and \alpha the electromagnetic fine structure constant. The Higgs vacuum energy density is much bigger than the measured vacuum energy density; the Higgs vev is v=(\sqrt{2}G_F)^{-1/2}\approx 246\ GeV, and that is a problem we cannot completely understand yet. If so, the Universe should have collapsed. And that is not the case.

In theoretical cosmology, we have to measure the redshift in order to know the scale factor of the expansion. They are linked by the equation

    \[1+z=\dfrac{a(t_0)}{a(t_e)}\]
The Friedmann equations are only the Einstein Field Equations for a homogeneous and isotropic Universe (described by a metric and a perfect fluid):

(10)   \begin{equation*}\left(\dfrac{\ddot{a}}{a}\right)=-\dfrac{4\pi G}{3}\left(\rho +\dfrac{3P}{c^2}\right)+\dfrac{\Lambda c^2}{3}\end{equation*}

(11)   \begin{equation*}\left(\dfrac{\dot{a}}{a}\right)^2=\dfrac{8\pi G\rho}{3}-\dfrac{\kappa c^2}{a^2}+\dfrac{\Lambda c^2}{3}\end{equation*}

(12)   \begin{equation*}T_{\mu\nu}=\left(\rho+\dfrac{P}{c^2}\right)u_\mu u_\nu+Pg_{\mu\nu}\end{equation*}

Conservation of energy at cosmological scales provides the continuity equation

    \[\dot{\rho}+3\dfrac{\dot{a}}{a}\left(\rho+\dfrac{P}{c^2}\right)=0\]
For matter (dust), we have \omega=0 and a pressureless composition, P=0. For radiation, we have P=\rho/3 and \omega=1/3. For dark energy, the cosmological constant Einstein thought was his biggest mistake, we have \omega=-1 and P=-\rho. If -1<\omega<-1/3 we have the so-called quintessence field: a dynamical field that resembles a scalar field, but it can also be another class of field. Quintessence and scalar fields are, generally speaking, a prediction of superstring theories and other unification models. If \omega<-1 we have the so-called phantom energy, a field that could literally destroy the Universe and provide a future singularity. It is the Big Rip (or Little Rip in some lite models). There is a relationship between the \omega parameter, the scale factor and the density:

(13)   \begin{equation*}\rho\propto a^{-3(1+\omega)}\end{equation*}

Thus, for dust \rho\propto a^{-3}, for radiation \rho\propto a^{-4}, and \rho\propto\mbox{constant} for dark energy (the cosmological constant is constant in density, as its name suggests). Cosmology, at the observational level, tries to fit data to the theory of LCDM via a global fit


of some parameters like \Omega_X, H_0, and other more specific parameters. We can also measure distances as functions of redshift; for instance, the comoving distance

    \[D_C=c\int_0^z\dfrac{dz'}{H(z')}\]
The effect of the cosmic expansion is not a Doppler shift: it just enlarges wavelengths. This is the cosmological redshift. It is not a Doppler shift as one usually thinks. Indeed, we have observed objects with redshift above 1, so the relationship v\sim cz=HD=\dfrac{\dot{a}}{a}d implies recession speeds above c if z>1. Of course, this does not imply relativity is wrong. It is only an apparent effect of the expansion.

That the Universe underwent a hot dense primordial phase, the Big Bang, is already proved beyond doubt. The COBE satellite and further CMB (cosmic microwave background) probes have proved that the young Universe was very close to a perfect blackbody; indeed, COBE itself proved it at the level of 400\sigma. We can now measure the fluctuations of the blackbody CMB temperature in the sky as a function of direction. It fits a perfect blackbody with some anisotropies. The power spectrum is the name of the Fourier transform of the temperature sky map. It is measured at the level of some \mu K, microkelvins. The first peak in that power spectrum tells us that our Universe is, surprisingly, flat, since our Universe is close to the critical energy density: \rho_U\sim\rho_c implies \kappa\sim 0. Please, do not tell that to flatlanders! The second peak tells us that there is dark matter out there. There are other hints of that: for instance, the peculiar velocities of galaxies (measured with real Doppler shifts), the rotation curves of spiral galaxies, and the velocity dispersion of elliptical galaxies. We expect a cosmic neutrino background at the level of

    \[T_\nu=\left(\dfrac{4}{11}\right)^{1/3}T_{CMB}\simeq 1.945 K\]

and a relic graviton background at the level of

    \[T_{g}=\left(\dfrac{g_s(T_0)}{g_s(T_P)}\right)^{1/3}T_{CMB}\]
If we plug in g_s=3.91 as the number of relativistic particle species today, and g_s=106.75 as the same number evaluated at Planck-time decoupling (which can only be computed exactly within a GUT/TOE, but we can roughly extrapolate the Standard Model up to that energy to get a bound), you get a 1 K relic graviton background as an upper bound (it can be lower if the number of particle species grows at higher energies), so 1 kelvin is a good robust bound. Other theories would lower that value. Extra-dimensional theories would also reduce the RGB by modifying the power law of the ratio of the numbers of particle species.
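Both background temperatures follow from the same (g_s)^{1/3} entropy scaling; a quick check of mine, with T_{CMB}=2.7255 K assumed:

```python
T_cmb = 2.7255   # K, present CMB temperature

# Cosmic neutrino background: photons are reheated by e+e- annihilation
# after neutrino decoupling, hence the (4/11)^(1/3) factor
T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_cmb          # ~ 1.95 K

# Relic graviton background (rough upper bound): same entropy argument with
# g_s = 3.91 today and g_s = 106.75 at decoupling (Standard Model content)
T_grav = (3.91 / 106.75) ** (1.0 / 3.0) * T_cmb     # ~ 0.9 K, below 1 K
```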

What else? Dark matter. WIMPs have been tested as candidates for decades, and they have failed so far. The naive cross section of dark matter would typically be \sigma_{DM}\sim G_FM_W\sim 10^{-36} cm^2. After 25 years of experimental searches, we know that \sigma_{DM}\leq 10^{-46}cm^2, provided it exists, of course. WIMPs would have frozen out at


and annihilation happens with rates \Gamma=n\sigma v. Relativistic dark matter is constrained by the number of known light neutrino species, since it cannot behave as radiation! No experiment has detected dark matter particles yet. Anyway, its mass could be anything within about 80 orders of magnitude.

Finally, have you ever wondered how many eras of cosmic evolution the Universe has faced? Let me list them:

  1. The Planck era: t\leq 10^{-43}s. Quantum gravity is necessary here, and likely for the next two or three eras.
  2. The inflation era: 10^{-43}s\leq t\leq 10^{-35}s. Inflation, an exponentially expanding Universe, happened here. Otherwise, the flatness problem and the observed structure could be impossible. B-modes in the CMB would be a proof of inflation. After the BICEP2 blunder, we have to wait for further evidence.
  3. Leptogenesis and baryogenesis. Where do quarks and leptons come from? Why and how were they formed? We do not know, but this era is a future target of experiments and theoretical studies. If neutrinos were Majorana particles, or if Majorana states of the neutrino species (right-handed!) were found, we could explain, in principle, the imbalance between matter and antimatter we observe today.
  4. Electroweak and QCD phase transitions. At the edge of our current knowledge, thanks to ALICE and quark-gluon plasma studies and the discovery of the Higgs field permeating the Universe. Further studies of this era will require gravitational waves and neutrinos, since no photon can carry information from this and earlier dark eras.
  5. Neutrino decoupling. At about 1 second after the Big Bang, neutrinos decouple from the primordial plasma.
  6. Electron-positron annihilation. After neutrinos decouple, electrons and positrons begin to annihilate.
  7. Big Bang nucleosynthesis. The Universe cools and, at about 3 minutes, the first nuclei (hydrogen, helium and lithium) form. Big Bang theory predicts the relative abundances of the main isotopes. Much later, the point where the matter and radiation densities equalize is reached.
  8. Recombination. The Universe expands and cools further. After 380000 years, electrons bind into the first atoms (H, He and Li). The cosmic temperature at that point is about 3000 K, well below the 158000 K naively associated with the 13.6 eV binding energy of the hydrogen ground state. The first stars (Population III) will form in the following millions of years, breeding the Universe with black hole seeds and, after billions of years (Gyr), galaxies.
  9. At some point, a few Gyr ago, dark energy began to dominate the cosmic expansion, previously dominated by matter. Thus, from radiation-dominated expansion we passed through a several-Gyr era of matter-dominated expansion, to end in this dark energy era. Is emergent life related to dark energy dominating the cosmic expansion? Probably not, but it is a striking coincidence.

See you in the next blog post!

Definition 2 (Angular distance). D_A=\dfrac{R}{\theta}.

Definition 3 (Friedmann equations).

    \[\left(\dfrac{\ddot{a}}{a}\right)=-\dfrac{4\pi G}{3}\left(\rho +\dfrac{3P}{c^2}\right)+\dfrac{\Lambda c^2}{3}\]

    \[\left(\dfrac{\dot{a}}{a}\right)^2=\dfrac{8\pi G\rho}{3}-\dfrac{\kappa c^2}{a^2}+\dfrac{\Lambda c^2}{3} \]

Definition 4 (Perfect fluid energy-momentum tensor). T_{\mu\nu}=\left(\rho+\dfrac{P}{c^2}\right)u_\mu u_\nu+Pg_{\mu\nu}.

Definition 5 (Cosmic neutrino and graviton backgrounds).

    \[T_\nu=\left(\dfrac{4}{11}\right)^{1/3}T_{CMB}\simeq 1.945 K\]


LOG#237. GW music.


The spectrum of gravitational waves!!!!! Purely gravitational wave music!

The blog post today will cover two topics from an elementary viewpoint: falling into a non-rotating black hole, and gravitational wave “music”, i.e., gravitational wave formulae! It is a hard bargain: you will fall into the BH singularity and be spaghettified, but you will be rewarded with gravitational wave physics in the second act! Happy???

Non-rotating black holes are called Schwarzschild black holes, or Schwarzschild-Tangherlini black holes, since Tangherlini generalized the black hole metric to extra space-like dimensions in 1963. I will keep the mathematics as simple as possible, but that will introduce some imprecisions that, I hope, experts in the field will forgive.

A classical particle has the Lagrangian

    \[L=-m\sqrt{-g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu}\]
It implies that

    \[\dfrac{\partial L}{\partial \dot{t}}=-\dfrac{m^2 g_{tt}\cdot \dot{t}}{L}=\mbox{constant}=-E\]


(1)   \begin{eqnarray*} \dot{t}=\dfrac{ L E}{m^2g_{tt}}\\ \dfrac{ds^2}{d\tau^2}=-1\\ L=-m\rightarrow \dot{t}=-\dfrac{E}{mg_{tt}}\end{eqnarray*}

and then

    \[g_{tt}\dot{t}^2+g_{rr}\dot{r}^2=-1\rightarrow \dot{r}^2=-\left(1-\dfrac{2GM}{r}\right)+\dfrac{E^2}{m^2}\]

Firstly, let us consider a test particle falling from rest at infinite distance from the black hole event horizon, so that E=m, and thus

    \[\dot{r}^2=\dfrac{2GM}{r}\]
Note that the above result is essentially the classical escape velocity. Secondly, make a trip towards the singularity, starting from r=r_0 (at \tau=0). Since \dot{r}<0 for an infalling particle, you get

    \[r^{1/2}dr=-\sqrt{2GM}d\tau\]
and now proceed to integrate the above equation with the proper above mentioned limits

    \[\int_{r_0}^{R}r^{1/2}dr=-\sqrt{2GM}\int_0^\tau d\tau\]

Remark: measured from the singularity at the center, the event horizon lies at r=R_S, the initial point at r=r_0 and the final point at r=R, so the distance travelled from the initial point is r_0-R.

After integration, and reintroducing c, you will obtain that

(2)   \begin{equation*}\tau_{BH}^f=\dfrac{2}{3R_S^{1/2}c}\left(r_0^{3/2}-R^{3/2}\right)\end{equation*}

If r_0=R_S and R=0 for the time we reach the singularity, then

    \[\tau=\dfrac{2}{3}\dfrac{R_S}{c}\]
This calculation can be performed for a D-dimensional black hole with d=D-1 space-like dimensions. If you define the higher dimensional version of the Schwarzschild radius, r_S=r_S(D), the analogue integration gives

(3)   \begin{equation*}\tau_{BH}^f=\dfrac{2}{(D-1)r_S^{(D-3)/2}c}\left(r_0^{(D-1)/2}-R^{(D-1)/2}\right)\end{equation*}

and similarly


A variation of this time (though you will likely be dead before reaching the singularity) can be obtained if you use the same geodesic equation above but with initial E=0 (or a large test mass, so E<<m); then the integral is slightly different and you have to be careful to evaluate it. I will calculate it only in the usual D=4 spacetime, to compare it with the previous result:


The result is

    \[\boxed{\tau_f=\dfrac{\pi}{2}\left(\dfrac{R_s}{c}\right)=\dfrac{\pi GM}{c^3}}\]

The difference is not big, since \tau_f=3\pi\tau/4, so we can be sure we will be pushed into the singularity within that order of time. Of course, the caveats are the survival of the passenger, and the fact that quantum gravity should be taken into account at some point in the black hole interior. Maybe even before, according to the firewall paradigm.
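The free-fall time in equation (2) is easy to evaluate. A minimal Python sketch, with approximate constants, for a fall from the horizon to the singularity of a solar-mass black hole:

```python
# Proper time to fall from r0 to R inside a Schwarzschild black hole,
# tau = (2 / (3 c R_S^(1/2))) * (r0^(3/2) - R^(3/2)), cf. equation (2),
# evaluated for a fall from the horizon to the singularity (solar mass).
G = 6.674e-11      # m^3 kg^-1 s^-2 (approximate)
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

R_S = 2 * G * M_sun / c**2   # Schwarzschild radius, about 2.95 km

def tau_fall(r0, R):
    """Proper infall time (s) between radii r0 > R, for E = m."""
    return 2.0 / (3.0 * R_S**0.5 * c) * (r0**1.5 - R**1.5)

tau = tau_fall(R_S, 0.0)     # this special case reduces to 2 R_S / (3 c)
print(f"R_S = {R_S:.0f} m, tau = {tau*1e6:.2f} microseconds")
```

For a solar-mass hole the whole trip from horizon to singularity takes a few microseconds of proper time; for a supermassive hole it scales up linearly with the mass.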

What is next? A small problem: what is the density a spherical exoplanet, moon or compact object must have for a body at its equator to be weightless? For 3d space, we can equate

    \[F_g=F_c\leftrightarrow \dfrac{GMm}{r^2}=m\dfrac{v^2}{r}\]

and plugging in v=2\pi r/T, V=4\pi r^3/3 and \rho=M/V, we will obtain

    \[\dfrac{4\pi G r^3\rho}{3r^2}=\dfrac{4\pi^2 r^2}{rT^2}\]

and thus

    \[\boxed{\rho=\dfrac{3\pi}{GT^2}=\dfrac{3\pi f^2}{G}}\]
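The boxed density is simple to evaluate numerically. A Python sketch, with an illustrative 24-hour spin period (an assumption, just to have a concrete number):

```python
# Critical mean density for equatorial weightlessness, rho = 3*pi/(G*T^2):
# gravity exactly balances the centrifugal term at the equator for spin period T.
import math

G = 6.674e-11   # m^3 kg^-1 s^-2 (approximate)

def critical_density(T):
    """Mean density (kg/m^3) giving zero effective weight at the equator."""
    return 3.0 * math.pi / (G * T**2)

rho = critical_density(24 * 3600.0)   # illustrative 24-hour spin period
print(f"{rho:.1f} kg/m^3")            # ~19 kg/m^3, far below rocky-planet densities
```

Note that only the period matters, not the size: any body of this mean density and a 24-hour day would leave its equatorial inhabitants floating.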



On the other hand, our Universe is mysterious. Just as we have different types of gravitational lensing (strong lensing, weak lensing and microlensing), via e.g. a simple equation


the study of gravitational waves has just begun. Perhaps we will have new tools to test even the black hole entropy of our almost-de Sitter Universe. The dS Universe entropy is given by


For Keplerian orbits, the same argument as before will help us to find the gravitational wave frequencies. The orbital frequency for quasi-circular Keplerian orbits will be

    \[f_K=\dfrac{1}{2\pi}\sqrt{\dfrac{GM}{R^3}}\]
The gravitational wave frequency in Einstein theory is twice the orbital frequency, i.e., f_{GW}=2f_K, and then

    \[f_{GW}=\dfrac{1}{\pi}\sqrt{\dfrac{GM}{R^3}}\]
or, in terms of density,

    \[\boxed{f_{GW}=2f_{K}=\dfrac{1}{\pi}\sqrt{\dfrac{4\pi G\rho}{3}}\sim\sqrt{G\rho}}\]

We want to compute this quantity relative to some reference scale, e.g., the solar mass scale. Then, since R_S=2GM/c^2 and M_\odot are the Schwarzschild radius and the solar mass respectively, by ratios we can calculate

    \[f_ {GW}=\dfrac{\sqrt{G}}{\pi}\left[\left(\dfrac{M}{R^3}\right)\left(\dfrac{M_\odot}{M_\odot}\right)\left(\dfrac{R_S^3}{R_S^3}\right)\right]^{1/2}\]


    \[f_ {GW}=\dfrac{c^3}{2\sqrt{2}\pi GM_\odot}\left(\dfrac{M_\odot}{M}\right)^{\frac{3}{2}}\left(\dfrac{M}{M_\odot}\right)^{\frac{1}{2}}\left(\dfrac{R_S}{R}\right)^{\frac{3}{2}}\]

and then

    \[\boxed{f_ {GW}=\dfrac{c^3}{2\sqrt{2}\pi GM_\odot}\left(\dfrac{M_\odot}{M}\right)\left(\dfrac{R_S}{R}\right)^{\frac{3}{2}}}\]

Therefore, the dominant gravitational wave frequency for quasicircular keplerian orbits reads off as

    \[\boxed{f_{GW}\approx 2.29\cdot 10^4\left(\dfrac{M_\odot}{M}\right)\left(\dfrac{R_S}{R}\right)^{\frac{3}{2}}\mbox{Hz}\simeq 23kHz\left(\dfrac{M_\odot}{M}\right)\left(\dfrac{R_S}{R}\right)^{\frac{3}{2}}}\]

For periods

    \[\boxed{T_{GW}=\dfrac{1}{f_{GW}}=43.7\mu s\left(\dfrac{M_\odot}{M}\right)\left(\dfrac{R_S}{R}\right)^{\frac{3}{2}}}\]
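The 2.29\cdot 10^4 Hz prefactor in the boxed formulae can be rebuilt directly from the constants. A Python sketch (approximate values of G, c and M_\odot):

```python
# Dominant quadrupole GW frequency for a quasi-circular Keplerian orbit,
# f_GW = (1/pi) * sqrt(G*M/R^3); the prefactor c^3 / (2*sqrt(2)*pi*G*M_sun)
# should come out near 2.29e4 Hz, as in the boxed result.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

prefactor = c**3 / (2 * math.sqrt(2) * math.pi * G * M_sun)
print(f"prefactor = {prefactor:.3e} Hz")   # ~2.29e4 Hz, i.e. ~23 kHz

def f_gw(M, R):
    """GW frequency (Hz) for total mass M (kg) and separation R (m)."""
    return math.sqrt(G * M / R**3) / math.pi

# Sanity check: at R = R_S for one solar mass, f_gw equals the prefactor.
R_S = 2 * G * M_sun / c**2
print(f"f_gw(M_sun, R_S) = {f_gw(M_sun, R_S):.3e} Hz")
```

The second print is an identity, not a coincidence: substituting R=R_S=2GM/c^2 into f_{GW}=(1/\pi)\sqrt{GM/R^3} collapses algebraically to c^3/(2\sqrt{2}\pi GM).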

In the end, unless quantum gravity changes the rules, any gravitationally bound system will decay gravitationally! The time for the gravitational coalescence of two orbiting bodies can also be computed from General Relativity for any (even eccentric) orbit. If initially the two bodies, with masses M_1, M_2, have separation a and eccentricity e, then the time till coalescence will be

    \[t_{GW}^c=\dfrac{5}{256}\dfrac{c^5a^4f(e)}{G^3 M_1M_2(M_1+M_2)}\]

with the eccentricity function

    \[f(e)=\left(1-e^2\right)^{7/2}\]
Since a is the major semi-axis, the apoastron and periastron distances are r_a=a(1+e), r_p=a(1-e). Also, define the chirp mass

    \[\mathcal{M}_c=\dfrac{(m_1m_2)^{3/5}}{(m_1+m_2)^{1/5}}\]
Then, the time to coalescence is

    \[\boxed{t_{GW,c}\approx 10^5Gyr\left(\dfrac{a}{AU}\right)^4\left(\dfrac{10M_\odot}{M_1}\right)\left(\dfrac{10M_\odot}{M_2}\right)\left(\dfrac{20M_\odot}{(M_1+M_2)}\right)\left(1-e^2\right)^{7/2}}\]
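The 10^5 Gyr figure can be checked by plugging numbers into the coalescence-time formula. A Python sketch with approximate constants:

```python
# Coalescence time t = (5/256) c^5 a^4 (1-e^2)^{7/2} / (G^3 m1 m2 (m1+m2)),
# checked for two 10 M_sun black holes separated by 1 AU on a circular orbit.
G, c = 6.674e-11, 2.998e8
M_sun, AU, Gyr = 1.989e30, 1.496e11, 3.156e16   # kg, m, s

def t_coalescence(m1, m2, a, e=0.0):
    """Time to coalescence (s) for semi-major axis a and eccentricity e."""
    return (5.0 / 256.0) * c**5 * a**4 * (1 - e**2)**3.5 / (G**3 * m1 * m2 * (m1 + m2))

t = t_coalescence(10 * M_sun, 10 * M_sun, AU)
print(f"{t / Gyr:.2e} Gyr")   # order 1e5 Gyr, far beyond the age of the Universe
```

The steep a^4 dependence is the whole story of LIGO-type sources: such a binary only merges within a Hubble time if some earlier mechanism shrinks the orbit well below an AU.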

while the peak of the gravitational wave frequency (music) will be


See, e.g., Wen 2003, or Antonini et al. 2014, to check the above formulae. On the other hand, merger anatomy is generally complex and hard to model with current tools. There are several phases: inspiral, plunge, merger and ring-down. The final product is in general a Kerr black hole, at least according to the current knowledge of gravity, general relativity and black hole theory; in binary black hole mergers, the last stage is the “balding” of that black hole. The perturbed Kerr black hole emits the so-called quasinormal modes. For the quadrupole QNM (quasinormal mode), from the paper PRD 34, 384 (1986), you can get the values

    \[f_{QNM}\simeq 32\mbox{kHz}\left(\dfrac{M_\odot}{M}\right)\left(1-0.63(1-j)^{0.3}\right)\]

    \[\tau_{QNM}=20\mu s\left(\dfrac{M}{M_\odot}\right)\dfrac{(1-j)^{-0.45}}{1-0.63(1-j)^{0.3}}\]

where j=a/M_f is the Kerr parameter of the final state. In this field, there are also zoom-whirls. Zoom-whirl orbits are perturbations of unstable circular orbits that exist within the innermost stable circular orbit (ISCO). The number of whirls n is related to the perturbation magnitude \delta r and the instability exponent \gamma via e^n\propto \vert\delta r\vert^{-\gamma}. Orbit taxonomy and classification of Kerr-like black hole orbits are also possible. One generally defines a rational number q=\omega+\nu/z, where \omega is the number of whirls, z is the number of leaves that make up the zooms, and \nu is the sequence in which the leaves are traced out (\nu/z<1). Any non-closed orbit is arbitrarily close to some periodic orbit. If a=J/M_fc, then j=a/M_f=J/M_f^2c.

There are two more places where gravitational waves are important. Firstly, the so-called Kozai-Lidov resonances in ternary systems. When a binary system is perturbed by a third body, the latter can induce periodic oscillations in the libration of the orbital ellipse and a variation of the orbital eccentricity. The typical Kozai-Lidov oscillations can be calculated to have a period

    \[T_{KL}=\dfrac{2T_0^2}{3\pi T}\left(1-e_0^2\right)^{3/2}\left(\dfrac{m_1+m_2+m_0}{m_0}\right)\]

where T is the (Keplerian) period of the triple’s inner orbit and T_0 is the (Keplerian) period of the triple’s outer orbit.

Finally, we have the so-called stochastic gravitational wave background, a buzz in GWs caused by unresolved GW sources. This is different from the relic graviton background (the gravitational analogue of the cosmic microwave background). Just as the current CMB has a temperature T_\gamma\sim 3K, and the cosmological neutrino background has a temperature T_\nu\leq 2K, the relic graviton background is expected to satisfy T_g\leq 1K. For the stochastic background, however, we define the energy density

    \[\Omega_{GW}(f)=\dfrac{1}{\rho_c}\left(\dfrac{ d\rho_{GW}}{d\ln f}\right)\]

and the gravitational wave cosmological stochastic density to be

    \[\rho=\dfrac{c^2}{32\pi G}\left<\dot{h}_{ab}\dot{h}^{ab}\right>\]

Therefore, the strain spectrum and the strain scale are respectively


    \[\boxed{h(f)=6.3\cdot 10^{-22}\sqrt{\Omega_{GW}(f)}\left(\dfrac{100Hz}{f}\right)^{3/2} Hz^{-1/2}}\]

Where do all of those equations come from? Consider a small perturbation of the spacetime metric

    \[g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\]

The Einstein Field Equations read

    \[R_{\mu\nu}-\dfrac{1}{2}g_{\mu\nu}R=\dfrac{8 \pi G}{c^4}T_{\mu\nu}\]

In vacuum, T_{\mu\nu}=0. Imposing the harmonic (Lorenz) gauge condition

    \[\partial_\nu h^\nu_\mu(x)-\dfrac{1}{2}\partial_\mu h^\nu_\nu=0\]

and with

    \[\square=\dfrac{1}{c^2}\dfrac{\partial^2}{\partial t^2}-\nabla^2\]

it yields

    \[\square \overline{h}_{\mu\nu}=0\]

and plus sources

    \[\square \overline{h}_{\mu\nu}=-\dfrac{16 \pi G }{c^4}T_{\mu\nu}\]

The simplest solution to a wave-equation is the plane wave

    \[h_{\mu\nu}(x)=\begin{pmatrix} h_+ & h_\times\\ h_\times & -h_+\end{pmatrix}\exp \left(i(kz-\omega t)\right)\]

With many simplifications, the solution can be rewritten as the variation of a quadrupole moment

    \[h^{ij}(t,x)\sim \dfrac{2G}{c^4 r}\dfrac{d^2}{dt^2}\left(I^{ij}(t-r/c)\right)\]

There, the distance to the source is r and the quadrupole moment is delayed, i.e., computed at the so-called retarded time. This is easier than the entropic uncertainty principle H(x)+H(p)\geq \ln (e\pi), for sure. Gravitational waves from binaries during the inspiral phase are just elementary physics from these considerations. First, assume edge-on observation (\theta=i=\pi/2), so that \cos\theta=0. Then, you can check that (t_r=t-r/c is the retarded time):

    \[h_+=h_+ (t,\theta,\psi,r)=\dfrac{4G\mu \omega^2_Ka^2}{c^4r}\left(\dfrac{1+\cos^2\theta}{2}\right)\cos\left(2\omega_Kt_r+\psi\right)\]
    \[h_\times=h_\times (t,\theta,\psi,r)=\dfrac{4G\mu \omega^2_Ka^2}{c^4r}\cos\theta\sin\left(2\omega_Kt_r+\psi\right)\]

Now, define

    \[h=\sqrt{h_+^2+h_\times^2}=\dfrac{4G\mu \omega_K^2a^2}{c^4r}\sqrt{\dfrac{(1+\cos^2\theta)^2}{4}+\cos^2\theta}\]

and then, keeping only the overall scale (the angular factor is of order one), deduce that

    \[h\simeq \dfrac{4G\mu\omega_K^2a^2}{c^4r}\]
where \mu=m_1m_2/(m_1+m_2), and with the aid of Kepler's third law you get

    \[h=\dfrac{4G^2}{c^4}\left(\dfrac{\mu M}{ra}\right)=\dfrac{4G^2m_1m_2}{c^4ra}=\dfrac{4}{c^4 r}\left(\mathcal{M}G\right)^{5/3}\left(\dfrac{\omega_{GW}}{2}\right)^{2/3}\]

with M=m_1+m_2. Now, let us collect the formulae

(4)   \begin{eqnarray*}\mathcal{M}_c=\dfrac{(m_1m_2)^{3/5}}{M^{1/5}}\\ M=m_1+m_2\\ \mu=\dfrac{m_1m_2}{M}\\ \eta=\dfrac{m_1m_2}{M^2}\\ T_K=\dfrac{2\pi}{\omega_K}=\left[\dfrac{4\pi^2a^3}{GM}\right]^{1/2}\\ a=\dfrac{(GM)^{1/3}}{\omega^{2/3}_K}\\ \omega_{ISCO}=\dfrac{2c^3}{6^{3/2}GM}\\ \omega_{GW}=2\omega_K\\ a_{\rm ISCO} = 3\times\frac{2G(m_1+m_2)}{c^2}\end{eqnarray*}


    \[h = \dfrac{4 G^{5/3}}{c^4 r} \dfrac{m_1m_2}{(m_1+m_2)^{1/3}}\omega_{\rm orb}^{2/3} = \dfrac{4 G^{5/3}}{c^4 r} m_{\rm chirp}^{5/3}\omega_{\rm orb}^{2/3}\]


    \[h\propto \left[\frac{m_{\rm chirp}}{{\rm M}_\odot}\right]^{5/3} \left[\frac{P_{\rm b}}{{\rm hours}}\right]^{-2/3} \left[\frac{r}{{\rm kpc}}\right]^{-1}\]

Moreover, we can calculate the gravitational wave power radiated in the GW emission as

    \[\begin{aligned}P &= \frac{{\rm d}E_{\rm orb}}{{\rm d} t} = -\frac{{\rm d}}{{\rm d} t}\left[\frac{Gm_1m_2}{2a}\right] = \frac{Gm_1m_2}{2}\frac{1}{a^2}\frac{{\rm d}a}{{\rm d}t}=\\&=\frac{32}{5}\frac{G^4}{c^5}\frac{1}{a^5}(m_1m_2)^2(m_1+m_2)\quad\text{from the quadrupole formula}\\&=\frac{32}{5}\frac{c^5}{G}\left[\frac{Gm_{\rm chirp}\omega_{\rm GW}}{2c^3}\right]^{10/3}\end{aligned}\]

and the time varying angular frequency will be

    \[\frac{{\rm d}\omega_{\rm GW}}{{\rm d}t} = \frac{96}{5\cdot 2^{8/3}}\left[\frac{Gm_{\rm chirp}}{c^3}\right]^{5/3}\omega_{\rm GW}^{11/3}\]

from which 

    \[\omega_{\rm gw} = \left[\frac{64}{5\times2^{2/3}}\right]^{-3/8}\left[\frac{Gm_{\rm chirp}}{c^3}\right]^{-5/8}t_{\rm GW}^{-3/8}\]

and for quasicircular orbits, you get the previously mentioned value

    \[t_{\rm GW}\sim\frac{5}{256}\frac{c^5}{G^3}\frac{a^4}{(m_1m_2)(m_1+m_2)}\]

while the general formula in the literature is

    \[t_{\rm GW} = \frac{5}{256} \frac{c^5}{G^3}\frac{a^4(1-e^2)^{7/2}}{(m_1m_2)(m_1+m_2)}\]

From Peters and Mathews, we also have

(5)   \begin{eqnarray*}\begin{aligned}\langle\frac{{\rm d} E}{{\rm d} t}\rangle &= -\frac{32}{5}\frac{G^4m_1^2m_2^2(m_1+m_2)}{c^5a^5(1-e^2)^{7/2}}\left(1+\frac{73}{24}e^2+\frac{37}{96}e^4\right)\\\langle\frac{{\rm d} L}{{\rm d} t}\rangle &= -\frac{32}{5}\frac{G^{7/2}m_1^2m_2^2(m_1+m_2)^{1/2}}{c^5a^{7/2}(1-e^2)^2}\left(1+\frac{7}{8}e^2\right)\end{aligned}\\ \begin{aligned}\langle\frac{{\rm d} a}{{\rm d} t}\rangle &= -\frac{64}{5}\frac{G^3m_1m_2(m_1+m_2)}{c^5a^3(1-e^2)^{7/2}}\left(1+\frac{73}{24}e^2+\frac{37}{96}e^4\right)\\\langle\frac{{\rm d} e}{{\rm d} t}\rangle &= -\frac{304}{15}e\frac{G^3m_1m_2(m_1+m_2)}{c^5a^4(1-e^2)^{5/2}}\left(1+\frac{121}{304}e^2\right)\end{aligned}\\ \langle\frac{{\rm d} a}{{\rm d} e}\rangle = \frac{12}{19}\frac{a}{e}\frac{\left[1+\frac{73}{24}e^2 + \frac{37}{96}e^4\right]}{(1-e^2)\left[1+\frac{121}{304}e^2\right]}\\ \langle\frac{{\rm d} a}{{\rm d} t}\rangle = -\frac{64}{5}\frac{G^3m_1m_2(m_1+m_2)}{c^5a^3}\quad (e=0) \end{eqnarray*}

Integrating the semi-axis from a_0 to a, you would get

    \[a(t) = (a_0^4-4Ct)^{1/4}\]

For the shrinking major semi-axis, the literature gives the similar result

    \[a(t) = \left(a_0^4-4C\frac{t}{(1-e^2)^{7/2}}\right)^{1/4}\]

and the integration constant is

    \[C=\dfrac{64}{5}\dfrac{G^3m_1m_2(m_1+m_2)}{c^5}\]
The function of radial shrinking in terms of eccentricity reads

    \[a(e)=\dfrac{c_0 e^{12/19}}{1-e^2}\left[1+\dfrac{121}{304}e^2\right]^{870/2299}\]

where c_0 is a constant fixed by the initial conditions.

The period decreases as




The power decrease is given by the formula


where f(e) is, as before,


Maggiore’s book on gravitational waves defines





    \[\tau\sim 2.18s\left(\dfrac{1.21M_\odot}{\mathcal{M}_c}\right)^{5/3}\left(\dfrac{100Hz}{f_{GW}}\right)^{8/3}\]
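That 2.18 s coefficient can be cross-checked numerically, using the equivalent closed form \tau=\frac{5}{256}\left(\frac{G\mathcal{M}_c}{c^3}\right)^{-5/3}\left(\pi f_{GW}\right)^{-8/3}, which carries the same scaling as the boxed expression. A Python sketch with approximate constants:

```python
# Remaining inspiral time tau = (5/256) (G*Mc/c^3)^(-5/3) (pi*f)^(-8/3);
# for chirp mass Mc = 1.21 M_sun and f_GW = 100 Hz this should reproduce
# the quoted ~2.18 s.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def chirp_time(Mc, f):
    """Time to coalescence (s) for chirp mass Mc (kg) at GW frequency f (Hz)."""
    return (5.0 / 256.0) * (G * Mc / c**3) ** (-5.0 / 3.0) * (math.pi * f) ** (-8.0 / 3.0)

tau = chirp_time(1.21 * M_sun, 100.0)
print(f"tau = {tau:.2f} s")   # ~2.2 s
```

Mc = 1.21 M_sun is the chirp mass of a canonical 1.4+1.4 M_sun neutron star binary, which is why the formula is normalized that way.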

See you in another blog post!


LOG#236. Exoplanets.

The Nobel Prize in Physics this year, 2019, was awarded to Jim Peebles, Michel Mayor and Didier Queloz. The latter two devoted their careers to the search for extrasolar planets, or exoplanets for short. That is, the search for worlds around other stars where life like us, or even unlike us, could develop. Astronomy now has the tools to fulfill the Giordano Bruno prophecy: there are billions of worlds out there. There is no heresy. Even though the first exoplanets were found around pulsars in 1992, the work of Mayor and Queloz provided the evidence we had long dreamed of that there are worlds out there. Wherever you look at the night sky, there is likely at least one exoplanet per star in our galaxy. Of course, there are stars without planets, stars with several planets, and even planets without stars (thrown out of their orbits, wandering through space).

This short post is an introduction to the science of exoplanets in a simple way. I am not going too deep now, since that would require more time than I have (even when I wish I had it!). Beyond this new category of posts, you will of course ask how we detect exoplanets. Well, there are different methods. The first one allowed Mayor and Queloz to find 51 Peg b and, after 24 years, win a Nobel Prize… Science is slow so many times…

1st. Radial velocity. At the Nobel conference, Ulf Danielsson reviewed this method. A star with a planet orbiting around it wobbles. The star emits light, and light has a spectrum. The wobbling star shifts its spectrum due to the pull of the hidden exoplanet, so you measure a shifted spectrum. The consequence is that the radial velocity we measure from the star executes harmonic motion! That is, due to the exoplanet's presence, the radial velocity varies periodically in time. The measurement of this radial velocity V_\star requires the measurement of the period T, done with a time series (the repeated measurement of the velocity of the star in time), the orbital radius of the circular motion (we suppose circular motion for simplicity, but in general there is an eccentricity), and the mass of the system M=M_\star+m_p. Generally speaking, we do not know the mass of the exoplanet from radial velocity alone, but we can bound its mass, since we usually do not know the orbital inclination with this method. Were the inclination known, you could derive the exoplanet mass. Generally, other types of observations are required to measure i, the orbital inclination, but it can be done. Other methods even provide a tool to derive the radius of the exoplanet (see the transit method below). Mathematically,

(1)   \begin{equation*}V_\star=\dfrac{m_p\sin(i)}{M_\star+m_p}\sqrt{G_N\dfrac{M_\star+m_p}{a}}\end{equation*}

where G_N is the gravitational constant, M_\star and m_p are the star and exoplanet masses, i the orbital inclination, and a the orbital radius (generally the major semi-axis). Given Kepler's third law, T^2=4\pi^2 a^3/GM, with M=M_\star+m_p, you can transform the above equation into

(2)   \begin{equation*}V_\star=\left(\dfrac{2\pi G}{T}\right)^{1/3}\dfrac{m_p\sin (i)}{\left(M_\star+m_p\right)^{2/3}}\end{equation*}

The radial velocity method is useful for telescope searches with a spectrometer, and it is mainly a tool for finding large exoplanets close to their host stars. This method is not the most popular today, as technology can now do better with other methods, but it is still useful to find some exoplanets, or as a check or additional independent method to confirm the existence of planets in some exosystems.
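Equation (2) is easy to evaluate. A Python sketch for a 51 Peg b-like system; the input values (about 0.47 Jupiter masses, a 4.23-day period, a solar-mass star and \sin i=1) are illustrative assumptions, not fitted data:

```python
# Radial-velocity semi-amplitude, eq. (2):
# V = (2*pi*G/T)^(1/3) * m_p * sin(i) / (M_star + m_p)^(2/3),
# for a circular orbit. Inputs below are illustrative 51 Peg b-like numbers.
import math

G = 6.674e-11                              # m^3 kg^-1 s^-2 (approximate)
M_sun, M_jup, day = 1.989e30, 1.898e27, 86400.0

def rv_semi_amplitude(m_p, M_star, T, sin_i=1.0):
    """Stellar radial-velocity amplitude in m/s (circular orbit)."""
    return (2 * math.pi * G / T) ** (1.0 / 3.0) * m_p * sin_i / (M_star + m_p) ** (2.0 / 3.0)

V = rv_semi_amplitude(0.47 * M_jup, 1.0 * M_sun, 4.23 * day)
print(f"V ~ {V:.0f} m/s")   # a few tens of m/s, the signal Mayor & Queloz saw
```

Tens of metres per second on a star: that is why a hot Jupiter was the first kind of exoplanet this method could find, while an Earth analogue induces only ~0.1 m/s.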

2nd. Transit method. Also known as the transit photometry method. It uses the measurement of the total flux from a star. During the transit of an exoplanet across the line of sight between the star and us, the amount of light we receive decreases, just as in our solar eclipses. One measures the flux variation

    \[\dfrac{\Delta F}{F_\star}=\dfrac{F_\star-F_{\star}^{\mbox{transit}}}{F_\star}=\dfrac{R_p^2}{R_\star^2}=\left(\dfrac{R_p}{R_\star}\right)^2\]

where F_\star is the flux without the eclipse, and F_{\star}^{\mbox{transit}} is the flux during the exoplanet transit. The relative variation of the flux is the square of the ratio of the exoplanet radius to the star radius. This method can produce false positives due to stellar activity or other bodies mimicking exoplanet transits. It allows you to determine T, a and R_p (the period, the orbital radius and the exoplanet size), but there are degeneracies among (T,a,m_p); measuring m_p and T is hard (though not as hard now as it was in the past), and the final test is the capability to measure m_p.
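The transit-depth formula gives an immediate feeling for the required photometric precision. A Python sketch with approximate radii:

```python
# Transit depth Delta F / F = (R_p / R_star)^2: a Jupiter crossing a Sun-like
# star dims the light by about 1%, an Earth by only ~0.008%.
R_sun, R_jup, R_earth = 6.957e8, 7.149e7, 6.371e6   # meters (approximate)

def transit_depth(R_p, R_star):
    """Fractional flux drop during a transit."""
    return (R_p / R_star) ** 2

print(f"Jupiter/Sun: {transit_depth(R_jup, R_sun):.4%}")
print(f"Earth/Sun:   {transit_depth(R_earth, R_sun):.4%}")
```

A 1% dip is within reach of good ground-based photometry, while the ~10^{-4} dip of an Earth twin is what space missions like Kepler were built for.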

3rd. Additional methods. Current technology in astronomy has even allowed and granted with more detection methods:

  • Direct imaging. Even though they are far, far away, we have managed to directly observe some exoplanets. Of course, not with the resolution of solar system planet images, but it can be done, and it will be done in the near future and beyond. Coronagraphy (and coronagraphs) is part of the future toolbox that will allow us to see other worlds and, perhaps, to detect life on their surfaces (even ETI).
  • Timing. It is used with pulsars (like the first exoplanet detections in 1992, around pulsar stars), binaries and multiplanetary systems as an additional method.
  • Gravitational microlensing. Future searches will benefit from this technique based on general relativity. For large distances and large a, even though it requires a one-shot chance alignment with the source, it can potentially detect exoplanets and measure their masses. So Einstein even contributed to the science of exoplanets! Didn't he?
  • Space gravitational wave detectors. Recently, it was even proposed that space-borne gravitational wave telescopes/interferometers could detect exoplanets under certain circumstances (especially around white dwarfs). So it yields an additional motivation to build space gravitational observatories!

Dyson suggested that citizens could try to search for exoplanets around low mass stars like M-stars or WD-stars. There, transits are easier to see, even though it likely requires good telescopes and lenses to see a transit around such faint stars.

All hail Helvetios and Dimidium, 51 Peg b star and exoplanet, formerly known as Bellerophon.

See you in another blog post!

LOG#235. Hyperballs.

Hi, everyone. The saddest thing about my job (working with teenagers and other people) is that it delays other stuff, like blogging! So you should be patient about getting more of “my stuff”.

What is going on today? Hyperballs. Or hyperspheres, and some cool variations. I have written about hyperspheres here before… I even provided formulae for their volumes (and areas). You know, in higher dimensions you can get the hyperparallelotope (with hypervolume V_n=\prod_i X_i) or the cross-polytope; for the familiar sphere S^2 you get V=\dfrac{4}{3}\pi R^3, and the more general formula for the nd hypersphere, or usual hyperball, volume is

(1)   \begin{equation*}V_n=\dfrac{\Gamma(1/2)^n}{\Gamma(\dfrac{n}{2}+1)}R^n\end{equation*}

where \Gamma(1/2)=\sqrt{\pi} and \Gamma(n)=(n-1)!. The hyperarea can be obtained from the nd recurrence, lowering dimensions with a useful derivative gadget tool:

(2)   \begin{equation*}A_{n-1}=\dfrac{dV_n}{dR}=\dfrac{n\pi^{n/2}R^{n-1}}{\Gamma(n/2+1)}\end{equation*}
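Formulae (1) and (2) are one-liners to check numerically. A Python sketch using the standard library gamma function:

```python
# Hyperball volume V_n = pi^(n/2) R^n / Gamma(n/2+1) and hyperarea
# A_{n-1} = dV_n/dR = n pi^(n/2) R^(n-1) / Gamma(n/2+1), checked against
# the familiar 3d results (4/3) pi R^3 and 4 pi R^2.
import math

def ball_volume(n, R=1.0):
    """Volume of the n-dimensional ball of radius R."""
    return math.pi ** (n / 2.0) * R ** n / math.gamma(n / 2.0 + 1)

def sphere_area(n, R=1.0):
    """Hyperarea A_{n-1} of the boundary of the n-ball of radius R."""
    return n * math.pi ** (n / 2.0) * R ** (n - 1) / math.gamma(n / 2.0 + 1)

print(ball_volume(3), 4 * math.pi / 3)    # both ~4.18879
print(sphere_area(3), 4 * math.pi)        # both ~12.56637
```

Note that \Gamma(1/2)^n=\pi^{n/2}, which is why the code can use `math.pi` directly instead of calling `math.gamma(0.5)`.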

For any 3d ellipsoid with semi-axes a,b,c, you can also derive that V=\dfrac{4}{3}\pi abc, and similar formulae hold for higher dimensional hyperellipsoids, following the pattern V(HE)=V_D(1)\prod R_i, where V_D(1) is the hypervolume of the unit hypersphere and the R_i are the hyperellipsoid (HE) semi-axes.

On the other hand, you do not need to keep things so simple; you can even change the norm in \mathbb{R}^n. Thus, given a vector (x_1,\cdots,x_n) in L_p with norm

    \[\vert x\vert_p=\left(\sum_{i=1}^n\vert x_i\vert^p\right)^{1/p}\]

then the so-called p-normed hyperball volume in nd follows:

    \[V^p_n=\dfrac{\left(2\Gamma\left(\dfrac{1}{p}+1\right)\right)^n}{\Gamma\left(\dfrac{n}{p}+1\right)}R^n\]
In particular, you get V^1_n=\dfrac{2^n}{n!}R^n and V^\infty_n=(2R)^n, and those match the expressions for the cross-polytope and the n-cube. Another possible generalization is the following. For any real positive numbers p_i, you can even define the balls:

    \[B_ {p_1, \ldots, p_n} = \left\{ x = (x_1, \ldots, x_n) \in \mathbf{R}^n : \vert x_1 \vert^{p_1} + \cdots + \vert x_n \vert^{p_n} \le 1 \right\}\]

Since Dirichlet times, mathematicians know the general formula for these hyperballs/hyperspheres:

    \[ V(B_{p_1, \ldots, p_n}) = 2^n \frac{\Gamma\left(1 + \frac{1}{p_1}\right) \cdots \Gamma\left(1 + \frac{1}{p_n}\right)}{\Gamma\left(1 + \frac{1}{p_1} + \cdots + \frac{1}{p_n}\right)}\]
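Dirichlet's formula is easy to test against the special cases we already know. A Python sketch; with all p_i=2 it must reduce to the unit hyperball volume, and with all p_i=1 to the cross-polytope volume 2^n/n!:

```python
# Dirichlet's volume formula for {sum |x_i|^{p_i} <= 1}:
# V = 2^n * prod Gamma(1 + 1/p_i) / Gamma(1 + sum 1/p_i).
import math

def dirichlet_volume(ps):
    """Volume of the generalized ball with exponents ps = [p_1, ..., p_n]."""
    num, s = 1.0, 0.0
    for p in ps:
        num *= math.gamma(1 + 1.0 / p)
        s += 1.0 / p
    return 2 ** len(ps) * num / math.gamma(1 + s)

n = 4
print(dirichlet_volume([2] * n), math.pi ** (n / 2) / math.gamma(n / 2 + 1))  # equal
print(dirichlet_volume([1] * n), 2 ** n / math.factorial(n))                  # equal
```

Mixed exponents, e.g. `dirichlet_volume([1, 2, 4])`, interpolate between the cube, the ball and the cross-polytope in each coordinate direction separately.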

Enough balls for today? Not yet! I wish! I am showing you in a moment why calculus rocks. And not only the usual calculus. Fractional calculus is a variation of common calculus where you can take non-integer derivatives: even irrational, complex or more complicated orders! Before that, let me remind you, as a caution, that \Gamma(\nu)=(\nu-1)!. And now define the Riemann-Liouville operator (fractional derivative):

(3)   \begin{equation*} D^{-\nu}f\equiv \dfrac{1}{\Gamma(\nu)}\int_0^\sigma \left(\sigma-y\right)^{\nu-1}f(y)dy\end{equation*}

Take now f(y)=1. Wow. Then,

(4)   \begin{equation*} D^{-\nu}(1)\equiv \dfrac{1}{\Gamma(\nu)}\int_0^\sigma \left(\sigma-y\right)^{\nu-1}dy=\dfrac{\sigma^\nu}{\Gamma(1+\nu)}\end{equation*}

and then, you obtain the partial result

    \[D^{-N/2}(1)=\dfrac{\sigma^{N/2}}{\Gamma\left(\dfrac{N}{2}+1\right)}\]
Now, insert \sqrt{\pi^N} with \nu=N/2 and \sigma=R^2, then

    \[\sqrt{\pi^N} D^{-N/2}(1)=\dfrac{\sqrt{\pi^N}}{\Gamma\left(\dfrac{N}{2}+1\right)}\left(\sqrt{\sigma}\right)^N\]

so, you finally deduce that

(5)   \begin{equation*}\boxed{V_N(R=\sqrt{\sigma})=\dfrac{\Gamma(1/2)^NR^N}{\Gamma(N/2+1)}=\Gamma\left(\dfrac{1}{2}\right)^ND^{-N/2}(1)=\left(-\dfrac{1}{2}\right)!^ND^{-N/2}(1)}\end{equation*}

or equivalently

(6)   \begin{equation*}\boxed{V_N=\dfrac{\Gamma(1/2)^NR^N}{\Gamma(N/2+1)}=\Gamma\left(\dfrac{1}{2}\right)^ND^{-N/2}(1)=\dfrac{D^{-N/2}(1)}{\pi^{-N/2}}}\end{equation*}

The fractional recurrence

    \[V_N(\sigma)=\left(\dfrac{1}{\pi}\dfrac{\partial}{\partial \sigma}\right)^{-1/2} V_{N-1}(\sigma)\]

with \sigma=R^2 holds. Note that the general Riemann-Liouville fractional derivative

    \[_a D^{-\nu}_x f(x)\equiv \dfrac{1}{\Gamma(\nu)}\int_a^x \left(x-y\right)^{\nu-1}f(y)dy\]

has gaps or poles, in principle, at values \nu=0,-1,-2,\ldots since \Gamma functions have singularities at negative integers, including zero.
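As a numerical sanity check of (4), the Riemann-Liouville integral of f=1 can be approximated with a simple midpoint rule and compared with \sigma^\nu/\Gamma(1+\nu). A Python sketch (the step count is an arbitrary choice):

```python
# Numerical check of D^{-nu}(1) = sigma^nu / Gamma(1+nu): approximate the
# Riemann-Liouville integral (1/Gamma(nu)) * int_0^sigma (sigma-y)^(nu-1) dy
# with a midpoint rule and compare with the closed form.
import math

def rl_integral_of_one(nu, sigma, steps=100000):
    """Midpoint-rule approximation of the RL operator applied to f(y)=1."""
    h = sigma / steps
    total = 0.0
    for k in range(steps):
        y = (k + 0.5) * h            # midpoints avoid the y=sigma endpoint
        total += (sigma - y) ** (nu - 1) * h
    return total / math.gamma(nu)

nu, sigma = 1.5, 2.0
numeric = rl_integral_of_one(nu, sigma)
exact = sigma ** nu / math.gamma(1 + nu)
print(numeric, exact)   # both ~2.1277
```

The midpoint evaluation matters: for \nu<1 the integrand blows up at y=\sigma, and a naive right-endpoint rule would divide by zero there.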

See you in another wonderful blog post!

LOG#234. Quilibrium theory?

Quantum Mechanics: is it unbreakable? Is it effective or fundamental? Could it be an approximation to another theory? We do not know the ultimate word on that yet. However, up to the current date (circa 2019), it is fundamental. Every trial and experiment to go beyond quantum mechanics, and every clever experiment done to crack it, has failed. Quantum theory has remained essentially in the same framework during the last 70 years. It has accomplished so much that, even if you find a theory going beyond quantum mechanics, you have to reproduce it in a certain limit; there is no way around its successful experimental tests. Perhaps we should accept it as it is. The greatest minds of all time found it hard anyway. Should we?

When compared to General Relativity, a theory that can be thought of as full of non-equilibrium and equilibrium states, Quantum Mechanics (QM), according to the Born rule \vert\Psi\vert^2=P(x,t), should be thought of as an equilibrium probability distribution. This is the idea of A. Valentini, a researcher who has spent quite a lot of his scientific career devoted to finding a generalization of QM. I am not sure if he is right, and I am afraid he is not, but anyway his ideas are worth a blog post.

Let me first point out an old mantra about classical preconceptions of atoms. Classical field theory predicts that accelerated charges radiate electromagnetic waves. The Larmor formula gives you the power, or energy loss, due to that emission:

(1)   \begin{equation*} P=\dfrac{Q^2a^2}{6\pi\varepsilon_0 c^3}\end{equation*}

A lengthy calculation allows you to obtain the decay time of any electron (or, generally, any charged particle) spiralling into the nucleus (via an integral):

(2)   \begin{equation*} t_c=\dfrac{4\pi^2\varepsilon_0^2m^2c^3r_0^3}{e^4}\end{equation*}
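Equation (2) is straightforward to evaluate for hydrogen. A Python sketch with approximate constants, taking r_0 as the Bohr radius:

```python
# Classical infall time of an electron spiralling into the proton,
# t_c = 4 pi^2 eps0^2 m^2 c^3 r0^3 / e^4, with r0 the Bohr radius:
# atoms would classically collapse almost instantly.
import math

eps0 = 8.854e-12   # F/m, vacuum permittivity
m_e = 9.109e-31    # kg, electron mass
c = 2.998e8        # m/s
e = 1.602e-19      # C, elementary charge
r0 = 5.292e-11     # m, Bohr radius

t_c = 4 * math.pi**2 * eps0**2 * m_e**2 * c**3 * r0**3 / e**4
print(f"t_c = {t_c:.2e} s")   # ~1.6e-11 s, i.e. tens of picoseconds
```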

For hydrogen, that time is of the order of 10 picoseconds. So atoms are classically unstable. QM solves the classical instability thanks to a radically new set of rules that allow charged particles to orbit the nuclei in a way compatible with experimental expectations. However, QM is what we call a local (sometimes non-local!) theory based on the idea that particles are really “probability waves”. Observers collapse the wave states when measuring any quantum observable. This uncertainty, while unobserved, is the key point of QM, which otherwise is a deterministic theory that allows you to evolve quantum wave functions unitarily.

Antony Valentini suggests that QM is really a particular simplification of a much more general theory based on non-equilibrium distributions that do not satisfy P=\vert\Psi\vert^2. This is a radical idea, since I do not know how to get probability conservation from it, but he states that relaxation and unrelaxation of the Universe provide the approximation in which, we know, QM is true. But outside of it, there is non-equilibrium.

Let me first introduce the basic ideas of de Broglie’s pilot-wave theory (later adopted by Bohm). Wave functions are defined as

    \[\Psi=\vert\Psi\vert e^{iS}\]


    \[m_i\dfrac{dx_i}{dt}=\nabla_i S\]

for any particle x_i. Then, Valentini argues,


implies QM iff P=\vert\Psi\vert^2. The equilibrium (quilibrium) theory is vindicated wherever

    \[\partial_t P+\nabla\cdot \left(P\dfrac{\nabla S}{m}\right)=0\]

you get

    \[\dfrac{\partial \vert\Psi\vert^2}{\partial t}+\nabla\cdot \left(\vert\Psi \vert^2\dfrac{\nabla S}{m}\right)=0\]

as the equilibrium equivalent of

    \[m\dfrac{dx}{dt}=\nabla S\]

The thing is that, well, there is no beyond-the-speed-of-light signalling in QM because of the equilibrium condition P=\vert\Psi\vert^2. You cannot beat the Heisenberg Uncertainty Principle in QM and thus, generally speaking, you cannot get superluminal stuff (at least macroscopically and at large times, excepting multitime theories). Whenever you plug in P(x,0)=\vert\Psi\vert^2 you will obtain an equilibrium distribution P(x,t)=\vert\Psi(x,t)\vert^2 in the framework of QM, for all times, due to the unitary evolution of the hamiltonian. And the same ensemble, with the same \Psi, can nonetheless have different x in the Bohm-de Broglie approach. However, a non-quantum, non-equilibrium theory would violate that. Non-local quantum theories in non-equilibrium (non-quilibrium theories) could be the reason behind the weird phenomenon known as entanglement. In non-local non-equilibrium theories, entanglement would be caused by a position dependent \Psi. How to test this idea is complicated. Moreover, QM contains entanglement in a natural way, even if we do not understand the origin of entanglement yet… However, Valentini suggests seeking quantum noise in the Big Bang relic radiation and backgrounds. Non-equilibrium theory would hint at power deficits at long wavelengths in the power spectrum! Non-equilibrium theory could imply a pre-inflationary phase in the very, very early (planckian?) Universe. It could imprint signals on the subsequent inflationary phase and influence the way in which the CMB decoupled. However, there is no plain approach or concrete prediction of what those signals are, to my knowledge.

Another concerning issue is related to the fact that space (spacetime) is expanding. Every part of the Universe can be seen as a single system inside a higher dimensional space. Non-equilibrium theory and its particles could be used to beat the HUP and send particles faster than the speed of light (in the absence of multiple times). Also, non-equilibrium particles would reestablish an absolute time, so I find it hard to reconcile that idea with a standard theory of relativity. Why are we stuck in equilibrium, though? Why is the Born rule true? Is there really an acceptable non-equilibrium theory?

Everything we know about the Universe comes from QM or its current incarnations, the Standard Model and the cosmological standard model (LCDM), which also uses QM in a subtle way. If there is a subquantum/ultraquantum or non-equilibrium substrate of what we know as reality, it will be highly non-trivial and very hard to test (maybe even not even wrong). QM assumes that particles are also waves. Waves experience different types of phenomena:

  • Superposition.
  • Linearity and non-linearity.
  • Dimension and dimensionality (generally integer, but fractional waves do exist). Fractals and multifractal waves could be considered.
  • Dispersion.
  • Diffraction.
  • Refraction.
  • Reflection.
  • Polarization (transverse waves only).
  • Modulation.
  • Resonance.
  • Amplification (and radiance or superradiance).
  • Coherence.
  • Interference.
  • Diffusion.
  • Attenuation, friction or damping.
  • Forcing.
  • Reverberation.

Some words on reverberation, dedicated to S. Hossenfelder. Reverberation is, for sound, the persistence of sound after the sound is produced. We could say our Universe, made of quantum fields/waves, is the reverberation of something that existed before space and before time, whatever they were. Perhaps we are wrong, and spacetime is forever. But Hawking radiation, if true, shows that even spacetime decays… Sadly, we are doomed if spacetime is not eternal. About reverberation, again, Wikipedia says that there is a typical measure of it, called reverberation time:

“(…)Reverberation time is a measure of the time required for the sound to “fade away” in an enclosed area after the source of the sound has stopped. When it comes to accurately measuring reverberation time with a meter, the term T60 (an abbreviation for Reverberation Time 60dB) is used. T60 provides an objective reverberation time measurement. It is defined as the time it takes for the sound pressure level to reduce by 60 dB, measured after the generated test signal is abruptly ended(…)”.

There are several types of reverberation, dubbed room, chamber, hall, cathedral, plate and shimmer. Also, there are some parameters in reverberation, called early reflections, reverb time, size, density, diffusion and pre-delay. For the reverb time, there is a semiempirical equation due to Sabine (W. C. Sabine, not S. Hossenfelder, sorry Sabine ;)):

(3)   \begin{equation*} RT_{60}=\dfrac{24\ln 10}{c_{20}}\left(\dfrac{V}{Sa}\right)\end{equation*}

There, Sabine (he, not she) established a relationship between the T_{60} of a room, its volume, and its total absorption (in sabins). This is given by the above equation, where c_{20} is the speed of sound in the room (for 20 degrees Celsius), V is the volume of the room in cubic meters, S is the total surface area of the room in square meters, and a is the average absorption coefficient of the room surfaces, while the product Sa is the total absorption in sabins. What is a sabin? Well, let me define sabins:

Definition 1 (Sabin). A sabin is a unit of (sound) absorption. Sabins may be defined with either imperial or metric units. One square foot of 100% absorbing material has a value of one imperial sabin, and one square metre of 100% absorbing material has a value of one metric sabin.

The total absorption A in metric sabins for a room containing many types of surface is given by:

    \[A=\sum_i S_i\alpha_i=S_1\alpha_1+S_2\alpha_2+\cdots+S_n\alpha_n\]
where S_i are the areas of the surfaces in the room (in m^2), and \alpha_i are the absorption coefficients of the surfaces. However, the total absorption in sabins (and hence reverberation time) generally changes depending on frequency (which is defined by the acoustic properties of the space). The Sabine equation does not take into account room shape or losses from the sound traveling through the air (important in larger spaces). Most rooms absorb less sound energy in the lower frequency ranges resulting in longer reverb times at lower frequencies. Sabine concluded that the reverberation time depends upon the reflectivity of sound from various surfaces available inside the hall. If the reflection is coherent, the reverberation time of the hall will be longer; the sound will take more time to die out. The reverberation time RT60 and the volume V of the room have great influence on the critical distance d_c (conditional equation):

    \[d_c=\dfrac{1}{4}\sqrt{\dfrac{\gamma A}{\pi}}\approx 0.057\sqrt{\dfrac{\gamma V}{RT_{60}}}\]

where critical distance d_c is measured in meters, volume V is measured in m^3, and reverberation time RT_{60} is measured in seconds. From the above approximation, the Sabine reverb time also reads

    \[RT_{60}=0.161\,\dfrac{V}{A}\ \mbox{s}\]
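As a quick numerical sketch of the two formulas above (the room dimensions and absorption coefficients below are made-up illustrative values, not measurements of any real room):

```python
# Sketch: total absorption in metric sabins, A = sum_i S_i * alpha_i,
# and the Sabine reverberation time RT60 = 0.161 V / A.

def total_absorption(surfaces):
    """A = sum of S_i * alpha_i, in metric sabins; surfaces is [(S, alpha), ...]."""
    return sum(S * alpha for S, alpha in surfaces)

def rt60_sabine(volume, absorption):
    """Sabine estimate RT60 = 0.161 V / A (V in m^3, A in sabins, result in s)."""
    return 0.161 * volume / absorption

# A 10 m x 8 m x 4 m room: floor, ceiling and walls (illustrative coefficients).
surfaces = [
    (80.0, 0.30),   # floor, carpeted
    (80.0, 0.05),   # ceiling, plaster
    (144.0, 0.10),  # walls
]
A = total_absorption(surfaces)       # 42.4 sabins
T60 = rt60_sabine(10 * 8 * 4, A)     # about 1.2 s
print(A, T60)
```

Note how a more absorbent room (larger A) kills the reverberation faster, as Sabine concluded.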

On the other hand, waves and particles are really two sides of the same coin: the quantum field (a non-classical artifact!). Excitations of fields are particles. Fields propagate and extend their local (non-local?) influence through wave equations (Wavy motion! Wibbly wobbly timey wimey stuff!). No joke: people assume that time is a strict progression from cause to effect, but actually, from a nonlinear, non-subjective viewpoint, time is more like a big ball of wibbly wobbly, timey wimey… stuff… I wanted to write that… Lol. Long time ago… You move through time and space, and dimensions! From zero dimensional objects to higher dimensional D objects and fields. Holography tells you that apparent dimension could be more than the real dimension, that is, that the real dimension is lesser than the apparent one. Just the opposite of higher dimensional theories. Perhaps space-time is doomed, and QM as well, as we discover phenomena shedding light into all the quantum crappy stuff we have today. However, quantum mechanics seems so true that people tried to quantize gravity… But problems remain. Our only successful approaches to quantum gravity are string theory and loop quantum gravity, both not without problems at the current time. If spacetime is really doomed, what are the right observables or degrees of freedom? That is the crucial question for quantum gravity, due to the link of black hole entropy with the microscopic, yet unveiled, fundamental degrees of freedom of spacetime. Whether they are strings or branes, our chaosmic fate is coded into them. Do we really need an extension of QM and space-time, going beyond QM and normal quantum field theories? Or are we missing something? Perhaps we can only describe space-time as a fluid, as its fundamental degrees of freedom are so tiny that they are not accessible in any foreseeable future. Perhaps we have to admit that the polytropic equation is our best way, that is, that

    \[P=K\rho^{1+1/n}\]

is the only simple way to handle spacetime at large scales, and the quantum vacuum is just a special type of superfluid or solid. Could we stop at the level of single equations like the one by Lane-Emden? That is:

    \[\dfrac{1}{\xi^2}\dfrac{d}{d\xi}\left(\xi^2\dfrac{d\theta}{d\xi}\right)+\theta^n=0\]
There is a mysterious connection to be explored at full power between entanglement and geometry, at the level of classical algebra and geometry… The metric is just the squared version of the vielbein:

    \[g_{\mu\nu}=\eta_{ab}e^a_\mu e^b_\nu\]

This relation contains a tensor product; indeed, it is the same tensor product we use in quantum phenomena. Yes! Entanglement everywhere! We could envision that there is a deep dictionary between entanglement classes and geometry. In fact, work of M. Duff and others hints at that too. Let me be more concrete. From the above metric definition you could think in terms of a ket

    \[\vert\mu\nu\rangle=\sum \eta_{ab}\vert e^a_\mu\rangle \vert e^b_\nu\rangle\]

such as

    \[\vert\mu\rangle=\sum c_a\vert e^a_\mu\rangle\equiv \sum c_a\vert a\rangle\]

    \[\vert\nu\rangle=\sum c_b\vert e^b_\nu\rangle\equiv \sum c_b\vert b\rangle\]

and more generally

    \[A=\vert \Psi\rangle_A=A^a_\mu\vert dx^\mu\rangle\]

    \[B=\vert \Psi\rangle_B=B^a_{\mu_1\mu_2}\vert dx^{\mu_1\mu_2}\rangle\]

    \[\Gamma=\vert \Psi\rangle_\Gamma=\Gamma^a_{\mu_1\cdots\mu_n}\vert dx^{\mu_1\cdots\mu_n}\rangle\]

for antisymmetric gauge fields and for  symmetric tensor fields

    \[G=\vert \Psi\rangle_G=\sum \eta_{a_1a_2\cdots a_n}\vert e^{a_1}_{\mu_1}\rangle\otimes\cdots\otimes\vert e^{a_n}_{\mu_n}\rangle\]

Perhaps our current knowledge of the global Universe as a quantum object, via inflation and models like eternal inflation, is wrong simply because we are not doing the right calculations, as e^{-S_{dS}}e^{N_{folds}} could perhaps be calculated with a better theory. That is a challenge for the 21st century.

See you in the next blog post!


LOG#233. Electron microscopes.

Surprise! Second post today. It is a nice post, I believe.

Usually, we see the world using photons in certain wavelengths. Our eyes can see only a very limited width of the electromagnetic spectrum. The quantum revolution taught us that we can use other particles (and other wavelengths) to see the world and the Universe in ways we could have never ever imagined. This fact is even more general and can be thought valid even for gravitational waves (bunches of gravitons!).

Electron microscopy first from wikispaces… There are several types of electron microscopes:

1st. Transmission electron microscopy (TEM). The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. From Wikipedia: the resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nanotechnology research and development. A TEM consists of an emission source or cathode, which may be a tungsten filament or needle, or a lanthanum hexaboride (LaB_6) crystal. Cryo-TEM is the cryogenic modification of TEM, used to do EM for biology and precision TEM imaging. Samples cooled to cryogenic temperatures and embedded in an environment of vitreous water allow useful biological studies, and this deserved a Nobel Prize in 2017, awarded to Jacques Dubochet, Joachim Frank, and Richard Henderson “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”.

2nd. Scanning electron microscopy (SEM). The scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning the surface with a focused beam of electrons. The electrons interact with atoms in the sample, producing various signals that contain information about the surface topography and composition of the sample. The electron beam is scanned in a raster scan pattern, and the position of the beam is combined with the intensity of the detected signal to produce an image. In the most common SEM mode, secondary electrons emitted by atoms excited by the electron beam are detected using an Everhart-Thornley detector. The number of secondary electrons that can be detected, and thus the signal intensity, depends, among other things, on specimen topography. SEM can achieve resolution better than 1 nanometer. It can also be made cryogenic, as Wikipedia says: “(…)Scanning electron cryomicroscopy (CryoSEM) is a form of electron microscopy where a hydrated but cryogenically fixed sample is imaged on a scanning electron microscope‘s cold stage in a cryogenic chamber. The cooling is usually achieved with liquid nitrogen. CryoSEM of biological samples with a high moisture content can be done faster with fewer sample preparation steps than conventional SEM. In addition, the dehydration processes needed to prepare a biological sample for a conventional SEM chamber create numerous distortions in the tissue leading to structural artifacts during imaging(…)”.

3rd. Serial-section electron microscopy (ssEM). One application of TEM is serial-section electron microscopy (ssEM), for example in analyzing the connectivity in volumetric samples of brain tissue by imaging many thin sections in sequence.

4th. Reflection electron microscopy (REM). In the reflection electron microscope (REM) as in the TEM, an electron beam is incident on a surface but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons is detected. This technique is typically coupled with reflection high energy electron diffraction (RHEED) and reflection high-energy loss spectroscopy (RHELS). Another variation is spin-polarized low-energy electron microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains.

Non-relativistic electrons have a kinetic energy

(1)   \begin{equation*} E_k=\dfrac{1}{2}mv^2\end{equation*}

where m=m_e=9.11\cdot 10^{-31}\,kg\sim 10^{-30}\,kg. Any electron that is accelerated by a voltage \Delta V changes its kinetic energy in a conservative way, so

(2)   \begin{equation*}\boxed{ \Delta E_k=-\Delta E_p=-q_e\Delta V}\end{equation*}

where the electron charge is

    \[q_e=-e=-1.6\cdot 10^{-19}\,C\]

Suppose that initially v_0=0m/s and V_0=0V, V_f=V. Then, the final kinetic energy reads

(3)   \begin{equation*}E_k(f)=\dfrac{1}{2}mv^2_f=\dfrac{p^2}{2m_e}=eV\end{equation*}

where p=mv is the non-relativistic linear momentum. In the quantum realm, any particle like the electron has an associated wave and wavelength. It is the de Broglie wavelength. And it reads

(4)   \begin{equation*}\boxed{\lambda_{db}=\dfrac{h}{mv}}\end{equation*}

Using the energy equation above, you can derive that

(5)   \begin{equation*} \boxed{p=\sqrt{2m_e eV}}\end{equation*}

and thus, you can derive the equation of the (non-relativistic) electron microscopy

(6)   \begin{equation*}\boxed{\lambda_e=\dfrac{h}{\sqrt{2m_eeV}}}\end{equation*}

Indeed, you can generalize this equation to a microscope of X-particles, where X-particles are particles with mass m_X and electric charge q_X=Ze, as follows:

(7)   \begin{equation*}\boxed{\lambda_X=\dfrac{h}{\sqrt{2m_XZeV}}}\end{equation*}

Good!!!! Now, some numerology. You can use the value of the Planck constant, roughly h\approx 6.63\cdot 10^{-34}J\cdot s, and then you can write

(8)   \begin{equation*}\boxed{\lambda_e=\dfrac{12.25\cdot 10^{-10}m}{\sqrt{V}}=\dfrac{1.225nm}{\sqrt{V}}}\end{equation*}

(9)   \begin{equation*}\boxed{\lambda_X=\dfrac{12.25\cdot 10^{-10}m}{\sqrt{N_XZ_XV}}=\dfrac{1.225nm}{\sqrt{N_XZ_XV}}}\end{equation*}

and where we wrote m_X=N_Xm_e and q_X=Z_Xe. If, in particular, the energy is given in eV (electron-volts), then you get

Example 1. \lambda_e=3.88pm at E=100keV.
Example 2. \lambda_e=2.74pm at E=200keV.
Example 3. \lambda_e=2.24pm at E=300keV.
Example 4. \lambda_e=1.23pm at E=1MeV.
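The examples above can be checked numerically. A minimal Python sketch of eq. (6), with rounded CODATA constants:

```python
# Sketch: non-relativistic de Broglie wavelength of an electron accelerated
# through a voltage V: lambda = h / sqrt(2 m_e e V).
from math import sqrt

h = 6.626e-34     # Planck constant, J s
m_e = 9.109e-31   # electron mass, kg
e = 1.602e-19     # elementary charge magnitude, C

def electron_wavelength(V):
    """Non-relativistic electron-microscope wavelength (m) for V in volts."""
    return h / sqrt(2 * m_e * e * V)

for V in (100e3, 200e3, 300e3, 1e6):
    print(f"{V/1e3:6.0f} kV -> {electron_wavelength(V)*1e12:.2f} pm")
```

At 100 kV this reproduces the 3.88 pm of Example 1, and at 1 MV the 1.23 pm of Example 4.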

Imagine muon microscopy, tau particle microscopy or W boson microscopy…

Now, you can enter the special relativistic electron microscope realm. Just as you have E=pc and \lambda=\dfrac{hc}{E} for photons (or any massless particle), for any relativistic particle with mass M=m\gamma and energy E=Mc^2=m\gamma c^2 (rest energy E_0=mc^2), the kinetic energy reads E_k=T=E-mc^2=mc^2(\gamma-1), since E=T+mc^2. Again, for a conservative force set-up, \Delta E_k=-\Delta E_p=-q\Delta V=+eV. Taking into account that

    \[\gamma=\dfrac{1}{\sqrt{1-\dfrac{v^2}{c^2}}}\]
the special relativity theory generalizes the de Broglie relationship (indeed, de Broglie himself used SR in his wave-particle duality!)

    \[\lambda=\dfrac{h}{m\gamma v}=\dfrac{h}{mv}\sqrt{1-\frac{v^2}{c^2}}\]

From E=mc^2\gamma you get

    \[\gamma=\dfrac{E}{mc^2}=1+\dfrac{T}{mc^2}\]
From p^2=m^2\gamma^2v^2=\dfrac{m^2v^2}{1-\frac{v^2}{c^2}} you obtain algebraically

    \[p^2c^2=E^2-E_0^2=\left(T+mc^2\right)^2-\left(mc^2\right)^2=T^2+2Tmc^2\]

and thus
(10)   \begin{equation*}\boxed{p=\sqrt{2Tm\left(1+\dfrac{T}{2mc^2}\right)}}\end{equation*}

Inserting this momentum into the relativistic de Broglie wavelength, you finally derive the full relativistic electron microscope equation

(11)   \begin{equation*}\boxed{\lambda=\dfrac{h}{p}=\dfrac{h}{\sqrt{2Tm\left(1+\frac{T}{2mc^2}\right)}}}\end{equation*}

or inserting T as eV units, you also get equivalently

(12)   \begin{equation*}\boxed{\lambda=\dfrac{h}{p}=h\left[2m_eeV\left(1+\dfrac{eV}{2mc^2}\right)\right]^{-1/2}}\end{equation*}

Some numbers:

(13)   \begin{equation*}\begin{vmatrix}\mbox{Voltage}\; V(kV)& \lambda_{nr}(nm)& \lambda_r(nm)& mass (\cdot m_e)& v(\cdot 10^8ms^{-1})\\ 100& 0.00386& 0.00370& 1.196& 1.644\\ 200& 0.00274& 0.00251& 1.391& 2.086\\ 400& 0.00193& 0.00164& 1.783& 2.484\\ 1000& 0.00122& 0.00087& 2.957& 2.823\end{vmatrix}\end{equation*}

and where \lambda_{nr} is the non-relativistic wavelength and \lambda_r is the full relativistic wavelength. They are linked through the following expression:

    \[\lambda_r=\dfrac{\lambda_{nr}}{\sqrt{1+\dfrac{eV}{2m_ec^2}}}\]

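The table can be reproduced numerically. A Python sketch comparing eq. (8) with the relativistic eq. (12), using rounded constants:

```python
# Sketch: non-relativistic vs. relativistically corrected electron wavelengths.
from math import sqrt

h, m_e, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8

def lambda_nr(V):
    """Non-relativistic wavelength (m), eq. (6)."""
    return h / sqrt(2 * m_e * e * V)

def lambda_r(V):
    """Relativistically corrected wavelength (m), eq. (12)."""
    return h / sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))

for V in (100e3, 200e3, 400e3, 1000e3):
    print(f"{V/1e3:5.0f} kV: nr {lambda_nr(V)*1e9:.5f} nm, rel {lambda_r(V)*1e9:.5f} nm")
```

At 100 kV this gives about 0.0039 nm vs. 0.0037 nm, matching the table; the relativistic correction grows with voltage.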
With simple scaling rules, you can extend the relativistic electron microscope to relativistic X-particle microscopes as follows

(14)   \begin{equation*}\boxed{\lambda=\dfrac{h}{p}=h\left[2m_XZ_XeV\left(1+\dfrac{Z_XeV}{2m_Xc^2}\right)\right]^{-1/2}}\end{equation*}

Definition 1 (Electron microscope). \lambda_e=\dfrac{h}{\sqrt{2m_eeV}}.

Definition 2 (X-particle microscope). \lambda_X=\dfrac{h}{\sqrt{2m_XZeV}}.

Definition 3 (Electron microscope(II)). \lambda_e=\dfrac{12.25\cdot 10^{-10}m}{\sqrt{V}}=\dfrac{1.225nm}{\sqrt{V}}.

Definition 4 (X-particle microscope(II)). \lambda_X=\dfrac{12.25\cdot 10^{-10}m}{\sqrt{N_XZ_XV}}=\dfrac{1.225nm}{\sqrt{N_XZ_XV}}.

Definition 5 (Relativistic electron microscope). \lambda=\dfrac{h}{p}=h\left[2m_eeV\left(1+\dfrac{eV}{2mc^2}\right)\right]^{-1/2}.

Definition 6 (Relativistic X-particle microscope). \lambda=\dfrac{h}{p}=h\left[2m_XZ_XeV\left(1+\dfrac{Z_XeV}{2m_Xc^2}\right)\right]^{-1/2}.

LOG#232. Is it relativistic?

Today or not today. That is the point. Today. How do you know if a given particle or system is (special) relativistic? That is a tricky question, since the reality is… everything is (special) relativistic. The real question is when you can use the usual newtonian approximation, and when you can't. That is the subject today. Firstly, newtonian physics or galilean relativity IS valid when you can safely say that:

  • Linear momentum is linear in velocity AND mass, p=mv.
  • Kinetic energy is quadratic in velocity E_k=\dfrac{1}{2}mv^2.
  • Velocity is much less than the speed of light, v<<c. Tricky: how much less? Without loss of generality, anything below 1\% of the speed of light is safely newtonian; you can still notice special relativity in some examples, but galilean or newtonian physics is good enough for most cases.
  • Time is absolute and universal, t=t_0.
  • Galilean transformations hold.

Secondly, special (Einstein’s) relativity holds whenever:

  • Momentum is non-linear in velocity:

        \[P=MV=m\gamma v=\dfrac{mv}{\sqrt{1-\frac{v^2}{c^2}}}\]

  • Kinetic energy is NOT quadratic in velocity; it is the total energy E=Mc^2 minus the rest energy E_0=mc^2:

        \[E_k=E-E_0=mc^2\left(\gamma-1\right)\]
  • Velocity is close (or equal, for massless particles/systems) to the speed of light:   

        \[v\sim c\;\;v\approx c\;\; v=c\]

  • Time is NOT universal, but relative to the observer, and time gets a dilation factor (time dilation): 

        \[t=\gamma t_0\]

  • Lorentz transformations hold.

Equivalently, in galilean relativity

    \[E=E_k=\dfrac{p^2}{2m}\]

for a free particle, and

    \[E=\dfrac{p^2}{2m}+V(x)\]

for a particle/system under conservative forces.

However, in special relativity, you get

    \[P^2=p_\mu p^\mu=-m^2c^2\]

i.e., the dispersion relationship

    \[E^2=(pc)^2+(mc^2)^2\]

in general systems. Here, E=E_k+E_0. Then,

    \[E_k=E-E_0=\sqrt{(pc)^2+(mc^2)^2}-mc^2=mc^2\left(\gamma-1\right)\]

and, expanding for small velocities,

    \[E_k=\dfrac{1}{2}mv^2+\dfrac{3}{8}\dfrac{mv^4}{c^2}+\cdots\]
The momentum is relativistic when E_k\sim 2mc^2. If you define \beta=v/c, and \gamma as above, then

  • A particle is galilean/newtonian iff v<<c, \beta<<1, \gamma\sim 1, E_k=\frac{mv^2}{2}.
  • A particle is special relativistic (einsteinian) iff v=c or v\sim c, or v\approx c, E_k=m(\gamma-1)c^2=E-E_0.

Mass measurements are generally non-linear in velocity, but an invariant mass definition is possible via the scalar product in Minkowski spacetime. Particles moving at exactly the speed of light are massless. This is the case of gluons, photons, and gravitons (m_{gluon}=m_\gamma=m_{graviton}=0): particles without rest mass verify E=pc and p=E/c=h/\lambda. Massive particles satisfy a more general dispersion relationship, as above,

    \[E^2=(pc)^2+(mc^2)^2=(pc)^2+E_0^2\]

and thus

    \[E=\pm\sqrt{(pc)^2+E_0^2}=\pm E_0\sqrt{1+\left(\dfrac{pc}{E_0}\right)^2}\]

or

    \[E=\pm pc\sqrt{1+\left(\dfrac{E_0}{pc}\right)^2}\]

The special case in which you have a massive particle with E_0<< pc is called ultrarelativistic case, and then you can approximate

    \[E\approx \pm pc\left(1-\dfrac{1}{2}\left(\dfrac{E_0}{pc}\right)^2\right)=\pm pc\mp\dfrac{E_0^2}{2pc}\]

Here we note the purely relativistic massless case and the ultrarelativistic case easily, and we can also distinguish the massive or almost massless case in the purely relativistic case or the ultrarelativistic case:

  • Ultrarelativistic: m\approx 0 but m\neq 0, E_0<<pc, and v\approx c.
  • Relativistic regime: m\approx 0 but m\neq 0 with E_0\sim pc, or m=0 with E=pc, v=c. Here, T\geq 2mc^2 or E=pc=E_k.
  • Non-relativistic case: m arbitrary, E_0>>pc, v<<c. Here, E_k=T<<2mc^2.


In the massive case with E_0\simeq pc, the above expansion gives roughly:

    \[E\simeq \pm pc\mp \dfrac{pc}{2}\]

In general, relativistic particles are:

  • Generally relativistic (v\sim c) with E^2=(pc)^2+(mc^2)^2, when roughly T=E_k\geq 2mc^2.
  • Ultrarelativistic, almost massless or massive, with v\sim c and E=\pm pc\mp \dfrac{E_0^2}{2pc}. This is the case of neutrinos (E_0<<pc).
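The rough classification above can be sketched in Python. The T\geq 2mc^2 cut is the one used in the text; the factor 100 separating "ultrarelativistic" is an illustrative choice of mine, not from the text:

```python
# Sketch: classify the kinematic regime of a particle from its kinetic
# energy T and rest energy E0 = m c^2 (same units for both, e.g. MeV).

def regime(T, E0):
    """Return a rough kinematic label."""
    if E0 == 0:
        return "massless (v = c)"
    if T >= 100 * E0:       # E0 << pc; the factor 100 is an illustrative cut
        return "ultrarelativistic"
    if T >= 2 * E0:         # T >= 2 m c^2, the threshold quoted in the text
        return "relativistic"
    return "newtonian"

# Electron, rest energy ~0.511 MeV:
print(regime(0.01, 0.511))    # keV-scale kinetic energy: newtonian
print(regime(2.0, 0.511))     # a few MeV: relativistic
print(regime(100.0, 0.511))   # 100 MeV electron: ultrarelativistic
```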

See you in another blog post!







LOG#231. Statistical tools.

Subject today: errors. And we will review formulae to handle them with experimental data.

Errors can be generally speaking:

1st. Random. Due to imperfections of measurements or intrinsically random sources.

2nd. Systematic. Due to the procedures used to measure or uncalibrated apparatus.

There is also a distinction of accuracy and precision:

1st. Accuracy is closeness to the true value of a parameter or magnitude. Under this definition, it is a measure of systematic bias or error. However, sometimes accuracy is defined (ISO definition) as the combination of systematic and random errors, i.e., accuracy would be the combination of the two observational errors above. High accuracy would require, in this case, high trueness and high precision.

2nd. Precision. It is a measure of random errors. They can be reduced with further measurements and they measure statistical variability. Precision also requires repeatability and reproducibility.

1. Statistical estimators.

Arithmetic mean:

(1)   \begin{equation*}\boxed{\overline{X}=\dfrac{\displaystyle{\sum_{i=1}^n x_i}}{n}=\dfrac{\left(\mbox{Sum of measurements}\right)}{\left(\mbox{Number of measurements}\right)}}\end{equation*}

Absolute error:

(2)   \begin{equation*}\boxed{ \varepsilon_{a}=\vert x_i-\overline{x}\vert}\end{equation*}

Relative error:

(3)   \begin{equation*}\boxed{\varepsilon_r=\dfrac{\varepsilon_a}{\overline{x}}\cdot 100}\end{equation*}

Average deviation or error:

(4)   \begin{equation*}\boxed{\delta_m=\dfrac{\sum_i\vert x_i-\overline{x}\vert}{n}}\end{equation*}

Variance or average quadratic error or mean squared error:

(5)   \begin{equation*}\boxed{\sigma_x^2=s^2=\dfrac{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}{n-1}}\end{equation*}

This is the unbiased variance, with the n-1 denominator known as the Bessel correction; when the sample is the whole population, the n-1 must be shifted to n. The unbiased formula is correct as far as the data are a sample from a larger population.

Standard deviation (mean squared error, mean quadratic error):

(6)   \begin{equation*}\boxed{\sigma\equiv\sqrt{\sigma_x^2}=s=\sqrt{\dfrac{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}{n-1}}}\end{equation*}

This is the unbiased estimator of the mean quadratic error, or the standard deviation of the sample. The Bessel correction is assumed whenever our sample is smaller in size than the total population. For the total population, the standard deviation reads after shifting n-1\rightarrow n:

(7)   \begin{equation*}\boxed{\sigma_n\equiv\sqrt{\sigma_{x,n}^2}=\sqrt{\dfrac{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}{n}}=s_n}\end{equation*}

Mean error or standard error of the mean:

(8)   \begin{equation*}\boxed{\varepsilon_{\overline{x}}=\dfrac{\sigma_x}{\sqrt{n}}=\sqrt{\dfrac{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}{n\left(n-1\right)}}}\end{equation*}

If, instead of the unbiased quadratic mean error, we use the total population error, the corrected standard error reads

(9)   \begin{equation*}\boxed{\varepsilon_{\overline{x},n}=\dfrac{\sigma_x}{\sqrt{n}}=\sqrt{\dfrac{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}{n^2}}=\dfrac{\sqrt{\displaystyle{\sum_{i=1}^n}\left(x_i-\overline{x}\right)^2}}{n}}\end{equation*}
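The estimators above are easy to check on a toy sample. A minimal stdlib-only Python sketch of eqs. (1), (6) and (8):

```python
# Sketch: arithmetic mean, Bessel-corrected standard deviation, and
# standard error of the mean for a small sample.
from math import sqrt

def mean(xs):
    """Arithmetic mean, eq. (1)."""
    return sum(xs) / len(xs)

def sample_std(xs):
    """Unbiased (Bessel-corrected) standard deviation, eq. (6)."""
    m = mean(xs)
    return sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def standard_error(xs):
    """Standard error of the mean, eq. (8): s / sqrt(n)."""
    return sample_std(xs) / sqrt(len(xs))

data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(mean(data), sample_std(data), standard_error(data))
```

For this sample the mean is 3, the standard deviation sqrt(2.5) ≈ 1.581, and the standard error ≈ 0.707.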

Variance of the mean quadratic error (variance of the variance):

(10)   \begin{equation*}\boxed{\sigma^2\left(s^2\right)=\sigma^2_{\sigma^2}=\sigma^2\left(\sigma^2\right)=\dfrac{2\sigma^4}{n-1}}\end{equation*}

Standard error of the mean quadratic error (error of the variance):

(11)   \begin{equation*}\boxed{\sigma\left(s^2\right)=\sqrt{\sigma^2_{\sigma^2}}=\sigma\left(\sigma^2\right)=\sigma_{\sigma^2}=\sigma^2\sqrt{\dfrac{2}{n-1}}}\end{equation*}

2. Gaussian/normal distribution intervals for a given confidence level (interval width a number of entire sigmas)

Here we provide the probability for a random variable X following a normal distribution to take a value inside an interval of width n\sigma (the fraction quoted is the approximate probability of falling outside the interval).

1 sigma amplitude (1\sigma).

(12)   \begin{equation*}x\in\left[\overline{x}-\sigma,\overline{x}+\sigma\right]\longrightarrow P\approx 68.3\%\sim\dfrac{1}{3}\end{equation*}

2 sigma amplitude (2\sigma).

(13)   \begin{equation*}x\in\left[\overline{x}-2\sigma,\overline{x}+2\sigma\right]\longrightarrow P\approx 95.4\%\sim\dfrac{1}{22}\end{equation*}

3 sigma amplitude (3\sigma).

(14)   \begin{equation*}x\in\left[\overline{x}-3\sigma,\overline{x}+3\sigma\right]\longrightarrow P\approx 99.7\%\sim\dfrac{1}{370}\end{equation*}

4 sigma amplitude (4\sigma).

(15)   \begin{equation*}x\in\left[\overline{x}-4\sigma,\overline{x}+4\sigma\right]\longrightarrow P\approx 99.994\%\sim\dfrac{1}{15787}\end{equation*}

5 sigma amplitude (5\sigma).

(16)   \begin{equation*}x\in\left[\overline{x}-5\sigma,\overline{x}+5\sigma\right]\longrightarrow P\approx 99.99994\%\sim\dfrac{1}{1744278}\end{equation*}

6 sigma amplitude (6\sigma).

(17)   \begin{equation*}x\in\left[\overline{x}-6\sigma,\overline{x}+6\sigma\right]\longrightarrow P\approx 99.9999998\%\sim\dfrac{1}{506797346}\end{equation*}

For a given confidence level C.L. (generally 90\%, 95\%, 98\%, 99\%), the interval width will be 1.645\sigma, 1.96\sigma, 2.326\sigma, 2.576\sigma.
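The coverage probabilities above follow from the normal CDF, P(\vert X-\mu\vert\leq n\sigma)=\mathrm{erf}(n/\sqrt{2}). A short Python check reproducing the table:

```python
# Sketch: n-sigma coverage of a normal distribution via the error function.
from math import erf, sqrt

def coverage(n):
    """P(|X - mu| <= n sigma) for a normal random variable X."""
    return erf(n / sqrt(2))

for n in range(1, 7):
    p = coverage(n)
    print(f"{n} sigma: P = {100*p:.7f}%  (outside ~ 1 in {1/(1-p):.0f})")
```

This reproduces, e.g., 68.3% at 1σ and the famous "1 in about 1.7 million" outside probability at 5σ.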

3. Error propagation.

Usually, the error propagates in non direct measurements.

3A. Sum and substraction.

Let us define x\pm \delta x and y\pm \delta y. Furthermore, define the variable q=x\pm y. The error in q would be:

(18)   \begin{equation*}\boxed{\varepsilon (q)=\delta x+\delta y}\end{equation*}

Example. M_1=540\pm 10 g, M_2=940\pm 20 g. M_1=m_1+liquid, with m_1=72\pm 1g  and M_2=m_2+liquid, with m_2=97\pm 1g. Then, we have:

M=M_1-m_1+M_2-m_2=1311g as liquid mass.

\delta M=\delta M_1+\delta m_1+\delta M_2+\delta m_2=32g, as total liquid error.

M_0=1311\pm 32 g is the liquid mass and its error, together, with the error quoted to two significant figures.
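The worked example can be re-derived in a couple of lines of Python, using eq. (18), absolute errors add under sums and differences:

```python
# Sketch: the liquid-mass example; worst-case errors add for sums/differences.

def add_sub_error(*errors):
    """Worst-case absolute error of a sum or difference, eq. (18)."""
    return sum(errors)

M1, dM1 = 540, 10   # container 1 + liquid, grams
m1, dm1 = 72, 1     # empty container 1
M2, dM2 = 940, 20   # container 2 + liquid
m2, dm2 = 97, 1     # empty container 2

M = M1 - m1 + M2 - m2                     # 1311 g of liquid
dM = add_sub_error(dM1, dm1, dM2, dm2)    # 32 g worst-case error
print(f"liquid mass = {M} +/- {dM} g")
```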

3B. Products and quotients (errors).


    \[x\pm \delta x=x\left(1\pm \dfrac{\delta x}{x}\right)\]

    \[y\pm \delta y=y\left(1\pm \dfrac{\delta y}{y}\right)\]

then, with q=xy you get

(19)   \begin{equation*}\boxed{\dfrac{\delta q}{\vert q\vert}=\dfrac{\delta x}{\vert x\vert}+\dfrac{\delta y}{\vert y\vert}\;\Longleftrightarrow\;\delta q=\vert y\vert\delta x+\vert x\vert\delta y}\end{equation*}

If q=x/y, you obtain essentially the same relative result:

(20)   \begin{equation*}\boxed{\dfrac{\delta q}{\vert q\vert}=\dfrac{\delta x}{\vert x\vert}+\dfrac{\delta y}{\vert y\vert}\;\Longleftrightarrow\;\delta q=\dfrac{\vert y\vert\delta x+\vert x\vert\delta y}{y^2}}\end{equation*}

3C. Error in powers.

With x\pm \delta x, q=x^n, then you derive

(21)   \begin{equation*}\dfrac{\delta q}{\vert q\vert}=\vert n\vert \dfrac{\delta x}{\vert x\vert}\;\Longleftrightarrow\;\delta q=\vert n\vert \vert x^{n-1}\vert \delta x\end{equation*}

and if g=f(x), with the error of x being \delta x, you get

(22)   \begin{equation*}\boxed{\delta f=\vert\dfrac{df}{dx}\vert\delta x}\end{equation*}

In the case of a several variables function, you apply a generalized Pythagorean theorem to get

(23)   \begin{equation*}\boxed{\delta q=\delta f(x_i)=\sqrt{\displaystyle{\sum_{i=1}^n}\left(\dfrac{\partial f}{\partial x_i}\delta x_i\right)^2}=\sqrt{\left(\dfrac{\partial f}{\partial x_1}\delta x_1\right)^2+\cdots+\left(\dfrac{\partial f}{\partial x_n}\delta x_n\right)^2}}\end{equation*}

or, equivalently, the errors are combined in quadrature (via standard deviations):

(24)   \begin{equation*}\boxed{\delta q=\delta f (x_1,\ldots,x_n)=\sqrt{\left(\dfrac{\partial f}{\partial x_1}\right)^2\delta^2 x_1+\cdots+\left(\dfrac{\partial f}{\partial x_n}\right)^2\delta^2 x_n}}\end{equation*}


(25)   \begin{equation*}\sigma (X)=\sigma (x_i)=\sqrt{\displaystyle{\sum_{i=1}^n}\sigma_i^2}=\sqrt{\sigma_1^2+\cdots+\sigma_n^2}\end{equation*}

for independent random errors (no correlations). Some simple examples are provided:

1st. q=kx, with x\pm \delta x, implies \boxed{\delta q=k\delta x}.

2nd. q=\pm x\pm y\pm \cdots, with x_i\pm \delta x_i, implies \boxed{\delta q=\delta x+\delta y+\cdots}.

3rd. q=kx_1^{\alpha_1}\cdots x_n^{\alpha_n} would imply

    \[\boxed{\dfrac{\delta q}{\vert q\vert}=\vert\alpha_1\vert\dfrac{\delta x_1}{\vert x_1\vert}+\cdots +\vert\alpha_n\vert\dfrac{\delta x_n\vert}{\vert x_n\vert}}\]
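The quadrature rule of eq. (24) can be sketched generically in Python, estimating the partial derivatives numerically (central differences) so any function can be plugged in:

```python
# Sketch: propagate independent uncertainties in quadrature, eq. (24),
# with partial derivatives estimated by central differences.
from math import sqrt

def propagate(f, x, dx, h=1e-6):
    """Quadrature error of f(*x) given uncertainties dx on each argument."""
    total = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(*xp) - f(*xm)) / (2 * h)   # numerical partial derivative
        total += (dfdx * dx[i]) ** 2
    return sqrt(total)

# q = x * y with x = 3 +/- 0.1 and y = 4 +/- 0.2:
# analytically dq = sqrt((y dx)^2 + (x dy)^2) = sqrt(0.16 + 0.36) ~ 0.721.
dq = propagate(lambda x, y: x * y, [3.0, 4.0], [0.1, 0.2])
print(dq)
```

Note this is the statistically independent (quadrature) combination; the worst-case linear rules above always give a larger error.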

When different experiments with measurements \overline{x}_i\pm\sigma_i are provided, the best estimator for the combined mean is a weighted mean with the variance, i.e.,

(26)   \begin{equation*}\overline{X}_{best}=\dfrac{\displaystyle{\sum_{i=1}^n}\dfrac{\overline{x}_i}{\sigma^2_i}}{\displaystyle{\sum_{i=1}^n}\frac{1}{\sigma^2_i}}\end{equation*}

The best standard deviation from the different combined measurements would be:

(27)   \begin{equation*} \dfrac{1}{\sigma^2_{best}}=\displaystyle{\sum_{i=1}^n}\frac{1}{\sigma^2_i} \end{equation*}

This is also the maximal likelihood estimator of the mean assuming they are independent AND normally distributed. There, the standard error of the weighted mean would be

(28)   \begin{equation*}\sigma_{\overline{X}_{best}}=\sqrt{\dfrac{1}{\displaystyle{\sum_{i=1}^n}\dfrac{1}{\sigma^2_i}}}\end{equation*}
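Equations (26)-(28) amount to an inverse-variance weighted average, which can be sketched as:

```python
# Sketch: combine independent measurements (mean, sigma) with inverse-variance
# weights, eqs. (26)-(28).
from math import sqrt

def combine(measurements):
    """measurements: list of (mean, sigma). Returns (best mean, best sigma)."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    best = sum(w * x for w, (x, _) in zip(weights, measurements)) / sum(weights)
    return best, 1.0 / sqrt(sum(weights))

# Two measurements of the same quantity, 10 +/- 1 and 12 +/- 2:
best, sigma = combine([(10.0, 1.0), (12.0, 2.0)])
print(best, sigma)   # the combination sits closer to the more precise value
```

The combined mean (10.4) sits closer to the more precise measurement, and the combined error (≈0.89) is smaller than either individual one, as expected.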

Least squares. Linear fits to a graph from points using least square procedure proceeds as follows. Let (X_i, Y_i) from i=1,\ldots,n be some sets of numbers from experimental data. Then, the linear function Y=AX+B that is the best fit to the data can be calculated with Y-Y_0=\overline{A}(X-X_0), where

    \[X_0=\overline{X}=\dfrac{\sum X_i}{n}\]

    \[Y_0=\overline{Y}=\dfrac{\sum Y_i}{n}\]

    \[\overline{A}=A=\dfrac{\sum (X_i-\overline{X})(Y_i-\overline{Y})}{\sum (X_i-\overline{X})^2}\]

Moreover, B=Y_0-AX_0.

We can also calculate the standard errors for A and B fitting. Let the data be

    \[y_i=\alpha+\beta x_i+\varepsilon_i\]

We want to minimize the variance, i.e., the squared errors \varepsilon_i^2, i.e., we need to minimize

    \[Q(\alpha,\beta)=\sum_{i=1}^n\varepsilon_i^2=\sum_{i=1}^n\left(y_i-\alpha-\beta x_i\right)^2\]

    \[\varepsilon_i=y_i-\alpha-\beta x_i\]

Writing y=\alpha+\beta x, the estimates are rewritten as

(29)   \begin{equation*} \hat{\alpha}=\overline{y}-\hat{\beta}\overline{x} \end{equation*}

(30)   \begin{equation*} \hat{\beta}=\dfrac{\sum_{i=1}^n(x_i-\overline{x})(y_i-\overline{y})}{\sum_{i=1}^n(x_i-\overline{x})^2}=\dfrac{s_{x,y}}{s_x^2}=r_{xy}\dfrac{s_y}{s_x} \end{equation*}

where s_x, s_y are the uncorrected standard deviations of the x and y samples, and s_x^2, s_{x,y} are the sample variance and sample covariance. Moreover, the fit parameters have the standard errors

(31)   \begin{equation*} s_{\hat{\beta}}=\sqrt{\dfrac{\frac{1}{n-2}\sum_i\hat{\varepsilon}_i^2}{\sum_{i=1}^n(x_i-\overline{x})^2}} \end{equation*}

(32)   \begin{equation*} s_{\hat{\alpha}}=s_{\hat{\beta}}\sqrt{\dfrac{1}{n}\sum_{i=1}^nx_i^2}=\sqrt{\dfrac{1}{n(n-2)}\left(\sum_{i=1}^n\hat{\varepsilon}_i^2\right)\dfrac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n(x_i-\overline{x})^2}} \end{equation*}
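Eqs. (29)-(32) can be sketched numerically as follows, with toy data (illustrative numbers only, not from any experiment):

```python
import math

# Least-squares estimates (29)-(30) and standard errors (31)-(32).
# Toy data, roughly y = x (illustrative only).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.0, 2.8, 4.2, 4.9]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n

Sxx = sum((a - xb) ** 2 for a in x)
beta = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / Sxx   # Eq. (30)
alpha = yb - beta * xb                                        # Eq. (29)

# Sum of squared residuals, then the standard errors (31)-(32):
eps2 = sum((b - alpha - beta * a) ** 2 for a, b in zip(x, y))
s_beta = math.sqrt(eps2 / (n - 2) / Sxx)
s_alpha = s_beta * math.sqrt(sum(a * a for a in x) / n)
```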

Alternatively, all of the above can also be written as follows. Define

(33)   \begin{eqnarray*} S_x=\sum x_i\\ S_y=\sum y_i\\ S_{xy}=\sum x_iy_i\\ S_{xx}=\sum x_i^2\\ S_{yy}=\sum y_i^2 \end{eqnarray*}

then, for a least-squares fit with y=\hat{\alpha}+\hat{\beta}x+\hat{\varepsilon}, we find out that

(34)   \begin{eqnarray*} \hat{\beta}=\dfrac{nS_{xy}-S_{x}S_{y}}{nS_{xx}-S_x^2}\\ \hat{\alpha}=\dfrac{1}{n}S_y-\hat{\beta}\dfrac{1}{n}S_x\\ s_{\varepsilon}^2=\dfrac{1}{n(n-2)}\left[nS_{yy}-S_y^2-\hat{\beta}^2(nS_{xx}-S_x^2)\right]\\ s_{\hat{\beta}}^2=\dfrac{ns^2_{\varepsilon}}{nS_{xx}-S_x^2}\\ s_{\hat{\alpha}}^2=s_{\hat{\beta}}^2\dfrac{1}{n}S_{xx} \end{eqnarray*}

and where the correlation coefficient is

(35)   \begin{equation*} r=\dfrac{nS_{xy}-S_xS_y}{\sqrt{(nS_{xx}-S_x^2)(nS_{yy}-S_y^2)}}=\quad \frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})}{(n-1)s_x s_y} =\frac{\sum\limits_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})} {\sqrt{\sum\limits_{i=1}^n (x_i-\bar{x})^2 \sum\limits_{i=1}^n (y_i-\bar{y})^2}} \end{equation*}

and where s_x, s_y are the corrected sample standard deviations of x, y. To know what s_{x,y} is in a more general setting, note that the sample mean vector \mathbf{\bar{x}} is a column vector whose j-th element \bar{x}_j is the average value of the N observations of the j-th variable:

    \[ \bar{x}_{j}=\frac{1}{N}\sum_{i=1}^{N}x_{ij},\quad j=1,\ldots,K.\]

and thus, the sample average or mean vector contains the average of every variable as a component:

(36)   \begin{equation*} \mathbf{\bar{x}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i = \begin{bmatrix} \bar{x}_1 \\ \vdots \\ \bar{x}_j \\ \vdots \\ \bar{x}_K \end{bmatrix} \end{equation*}

The sample covariance matrix is a K\times K matrix

    \[\textstyle \mathbf{Q}=\left[ q_{jk}\right] \]

with entries

    \[q_{jk}=s_{x,y}=\frac{1}{N-1}\sum_{i=1}^{N}\left( x_{ij}-\bar{x}_j \right) \left( x_{ik}-\bar{x}_k \right)\]

where q_{jk} is an estimate of the covariance between the j-th variable and the k-th variable of the population underlying the data. In terms of the observation vectors, the sample covariance is

    \[\mathbf{Q} = s_{x,y}={1 \over {N-1}}\sum_{i=1}^N (\mathbf{x}_i-\mathbf{\bar{x}}) (\mathbf{x}_i-\mathbf{\bar{x}})^\mathrm{T}\]
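The "sum" form (34)-(35) and the sample covariance matrix can be checked numerically. A minimal sketch with randomly generated data (an illustration, not the post's data), comparing against NumPy's built-in `corrcoef` and `cov`:

```python
import numpy as np

# Slope, intercept and correlation coefficient from the five accumulators
# S_x, S_y, S_xy, S_xx, S_yy of Eq. (33), plus the K-by-K sample
# covariance matrix Q with the unbiased 1/(N-1) normalization.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, size=50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)   # nearly linear data
n = len(x)

Sx, Sy = x.sum(), y.sum()
Sxy, Sxx, Syy = (x * y).sum(), (x * x).sum(), (y * y).sum()

beta = (n * Sxy - Sx * Sy) / (n * Sxx - Sx**2)       # Eq. (34)
alpha = Sy / n - beta * Sx / n
r = (n * Sxy - Sx * Sy) / np.sqrt((n * Sxx - Sx**2) * (n * Syy - Sy**2))

# Sample mean vector and covariance matrix for K = 2 variables:
X = np.column_stack([x, y])                          # N observations, K columns
D = X - X.mean(axis=0)                               # centred observations
Q = D.T @ D / (n - 1)                                # sample covariance matrix
```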

Finally, you can also compute confidence intervals for \hat{\beta} and \hat{\alpha}. The t-value has a Student’s t-distribution with n-2 degrees of freedom. Using it, we can construct a confidence interval for \hat{\beta}:

    \[ \beta \in \left[\widehat\beta - s_{\widehat\beta} t^*_{n - 2},\ \widehat\beta + s_{\widehat\beta} t^*_{n - 2}\right]\]

at confidence level (C.L.) 1-\gamma, where t^*_{n - 2} is the \left(1 \;-\; \frac{\gamma}{2}\right)\text{-th} quantile of the t_{n-2} distribution. For example, if \gamma=0.05, then the C.L. is 95\%.

Similarly, the confidence interval for the intercept coefficient \hat{\alpha} is given by

    \[\alpha \in \left[ \widehat\alpha - s_{\widehat\alpha} t^*_{n - 2},\ \widehat\alpha + s_{\widehat{\alpha}} t^*_{n - 2}\right]\]

at confidence level (C.L.) 1-\gamma, where as before above

    \[s_{\widehat\alpha} = s_{\widehat{\beta}}\sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2} = \sqrt{\frac{1}{n(n - 2)} \left(\sum_{i=1}^n \widehat{\varepsilon}_i^{\,2} \right) \frac{\sum_{i=1}^n x_i^2} {\sum_{i=1}^n (x_i - \bar{x})^2} }\]
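A minimal sketch of the 95% C.L. interval for the slope, with toy data (illustrative numbers only); the quantile t^*_{n-2}=2.776 for n-2=4 degrees of freedom is taken from standard tables (`scipy.stats.t.ppf(0.975, 4)` would return it too):

```python
import math

# 95% confidence interval for the slope, following the construction above.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1, 5.9]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
Sxx = sum((a - xb) ** 2 for a in x)

beta = sum((a - xb) * (b - yb) for a, b in zip(x, y)) / Sxx
alpha = yb - beta * xb
resid2 = sum((b - alpha - beta * a) ** 2 for a, b in zip(x, y))
s_beta = math.sqrt(resid2 / ((n - 2) * Sxx))

t_star = 2.776                       # t*_{n-2} for gamma = 0.05, n - 2 = 4
ci = (beta - s_beta * t_star, beta + s_beta * t_star)
```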

Remark: for non-homogeneous samples, the best estimate of the central value is not the arithmetic mean, but the median.

See you in other blog post!

LOG#230. Spacetime as Matrix.

Surprise! Double post today! Happy? Let me introduce you to some abstract, uncommon representations of spacetime. You know we usually represent spacetime as “points” in a certain manifold, and we usually associate points to vectors, or directed segments, X=X^\mu e_\mu, in D=d+1 dimensional spaces IN GENERAL (I am not discussing multitemporal stuff today, for simplicity).

Well, the fact is that in 4d spacetime, and in certain other dimensions, you can represent spacetime as matrices, i.e., as square tables of numbers. I will focus on three simple examples:

  • Case 1. 4d spacetime. Let me define \mathbb{R}^4\simeq \mathbb{R}^{3,1}=\mathbb{R}^{1,3}\simeq \mathcal{M}_{2\times 2}(\mathbb{C}) as isomorphic spaces, then you can represent spacetime X^\mu e_\mu=X as follows

(1)   \begin{equation*} \boxed{X=\begin{pmatrix} x^0+x^3& x^1+ix^2\\ x^1-ix^2& x^0-x^3\end{pmatrix}=\begin{pmatrix} x^0+x^3& z\\ \overline{z}& x^0-x^3\end{pmatrix}}\end{equation*}

and where z=x^1+ix^2=x^1+x^2e_2=\displaystyle{\sum_{j=1}^2}x^je_j\in\mathbb{C} is a complex number (with e_1=1).
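As a quick numerical sanity check (a minimal sketch with arbitrary coordinate values), the determinant of the Case 1 matrix reproduces the Minkowski quadratic form (x^0)^2-(x^1)^2-(x^2)^2-(x^3)^2, and the matrix is Hermitian:

```python
import numpy as np

# Case 1: X = [[x0+x3, x1+i x2], [x1-i x2, x0-x3]]; det X equals the
# Minkowski norm, since det X = (x0)^2 - (x3)^2 - |z|^2.
x0, x1, x2, x3 = 2.0, 0.5, -1.0, 0.75
z = x1 + 1j * x2
X = np.array([[x0 + x3, z],
              [np.conj(z), x0 - x3]])

det = np.linalg.det(X).real
minkowski = x0**2 - x1**2 - x2**2 - x3**2
```

This is why SL(2,\mathbb{C}) acting by X\mapsto AXA^\dagger preserves the Minkowski norm: it preserves the determinant.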

  • Case 2. 6d spacetime. Let me define \mathbb{R}^6\simeq \mathbb{R}^{5,1}=\mathbb{R}^{1,5}\simeq \mathcal{M}_{2\times 2}(\mathbb{H}) as isomorphic spaces, then you can represent spacetime X^\mu e_\mu=X as follows

(2)   \begin{equation*} \boxed{X=\begin{pmatrix} x^0+x^5& x^1+ix^2+jx^3+kx^4\\ x^1-ix^2-jx^3-kx^4& x^0-x^5\end{pmatrix}=\begin{pmatrix} x^0+x^5& q\\ \overline{q}& x^0-x^5\end{pmatrix}}\end{equation*}

and where q\in\mathbb{H} is a quaternion number q=x^1+ix^2+jx^3+kx^4=x^1+x^2e_2+x^3e_3+x^4e_4=\displaystyle{\sum_{j=1}^4}x^je_j, with e_1=1.

  • Case 3. 10d spacetime. Let me define \mathbb{R}^{10}\simeq \mathbb{R}^{9,1}=\mathbb{R}^{1,9}\simeq \mathcal{M}_{2\times 2}(\mathbb{O}) as isomorphic spaces, then you can represent spacetime X^\mu e_\mu=X as follows

(3)   \begin{equation*} \boxed{X=\begin{pmatrix} x^0+x^9& x^1+\sum_{j=2}^8x^je_j\\ x^1-\sum_{j=2}^8x^je_j& x^0-x^9\end{pmatrix}=\begin{pmatrix} x^0+x^9& h\\ \overline{h}& x^0-x^9\end{pmatrix}}\end{equation*}

and where h\in\mathbb{O} is an octonion number, h=\displaystyle{\sum_{j=1}^8}x^je_j=x^1+x^2e_2+x^3e_3+x^4e_4+x^5e_5+x^6e_6+x^7e_7+x^8e_8, with e_1=1.

Challenge final questions for you:

  1. Is this construction available for different signatures?
  2. Can you generalize this matrix set-up for ANY spacetime dimension? If you do that, you will understand the algebraic nature of spacetime!

Hint: Geometric algebras or Clifford algebras are useful for this problem and the above challenge questions.

Remark: These matrices are useful in

  • Superstring theory.
  • Algebra, spacetime algebra, Clifford algebra, geometric algebra.
  • Supersymmetry.
  • Supergravity.
  • Twistor/supertwistor models of spacetime.
  • Super Yang-Mills theories.
  • Brane theories.
  • Dualities.
  • Understanding the Hurwitz theorem.
  • Black hole physics.