LOG#198. Bounds for Quantum Gravity?

The previous post was about 3… Now, we have a post about some “high 5”. Why these five?

Why 5 and not 3? I love Japan; call me an ikigai fan 😉

Well, numbers are cool. Now, 5 fingers on your hand… Oh, wait, you need a multipass for the 5th element, the perfect human being…

I am not a Canadian citizen, but I like some “Canadian ways”. Let me begin…


The most challenging unsolved issue for ANY theoretical physicist is likely the subject of quantum gravity (QG). No success (partial and incomplete!) of any current approach can hide the fact that we are still lacking fundamental details. Superstring/M-theory, loop quantum gravity (LQG) and other minor areas of research, like phenomenological quantum gravity or extended relativities, need more data. Or, maybe, we should look at our actual knowledge in a different fashion.

In this post, I am going to remind my readers of 4 (maybe 5?) bounds pointing out that something else, engaging and IMPORTANT, is missing in our current understanding of matter and energy. Also, information. Information as a key concept is growing up, especially within Quantum Information Theory (QIT). The forthcoming quantum computing and robotics, and likely A.I. (Artificial Intelligence), could change the rules of the game forever, even for theorists and experimentalists!

1st bound. The Margolus-Levitin theorem.

Coming from the stunning world of Quantum Mechanics and its timey-wimey rules… This powerful theorem states that the processing rate of ANY quantum computing event (or likely any other form of computation!) can NOT be higher than about 6\cdot 10^{33} operations per second and per joule. Equivalently:

“Any quantum system of energy E needs at least a MINIMAL TIME (chronon) t_\perp=h/4E to evolve into an orthogonal state.”


So, T\geq t_\perp (at least t_\perp!), and thus, \Gamma\leq 1/t_\perp, at most! That is, there is a maximal width

    \[\boxed{\Gamma\leq \dfrac{1}{t_\perp}=\dfrac{4E}{h}=\dfrac{2E}{\hbar \pi}}\]

Remark: The Margolus-Levitin theorem provides what I call the “orthogonal projectable chronon”, or the orthogonal chronon time acting on the projection postulate in any orthogonal quantum measurement.
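You can check the 6\cdot 10^{33} figure directly from the boxed formula. A minimal Python sketch (the helper name is mine, just for illustration; the constant is CODATA):

```python
# Numerical check of the Margolus-Levitin bound: the maximal rate of
# orthogonal state transitions is 1/t_perp = 4E/h, i.e. about
# 6e33 operations per second and per joule of energy.
h = 6.62607015e-34  # Planck constant, J*s

def max_ops_per_second(energy_joules):
    """Margolus-Levitin bound on orthogonalizations per second."""
    return 4.0 * energy_joules / h

rate_per_joule = max_ops_per_second(1.0)  # for E = 1 J
print(f"{rate_per_joule:.2e} ops/s per joule")  # ~6.04e33
```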

2nd bound. Landauer’s principle.

Landauer’s principle states that there is a minimum possible amount of energy required to erase one bit of information, known as the Landauer limit:

    \[E_{min}=E_0=k_BT\ln 2 \]

For an environment or reservoir at temperature T, an energy E=TS must be emitted into that environment if the amount of added entropy is S. For a computational operation in which 1 bit of logical information is lost, the amount of entropy generated is at least k_B\ln 2, and so the energy that must eventually be emitted to the environment is

    \[\boxed{E\geq k_BT\ln 2}\]
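A quick numerical sketch of the Landauer limit at room temperature (the helper name is mine, for illustration only):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(T):
    """Minimum energy (J) required to erase one bit at temperature T (K)."""
    return k_B * T * math.log(2)

# At room temperature (300 K) erasing one bit costs at least ~3e-21 J.
print(f"{landauer_limit(300):.3e} J")  # ~2.871e-21 J
```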

3rd bound. Bremermann’s limit.

In Quantum Noise and Information, H. J. Bremermann proposed (in 1965) a bound on channel capacity, that is, a limit on quantum processing speed! It is based on the requirements of relativity AND quantum mechanics, plus information theory. The capacity C of a band-limited channel is given by a formula due to Shannon

    \[\boxed{C=\nu_m \log_2 \left(1+\dfrac{S}{N}\right)}\]

where \nu_m is the bandwidth, and S and N are the signal and noise powers.

Proposition (Bremermann’s limit, 1965). Channel capacity bound: “The capacity of any closed information transmission or processing system does not exceed mc^2/h bits per second, where m is the mass of the system, c is the speed of light, and h is Planck’s constant.”

Note that it links the light barrier and the quantum of action barrier. Using the channel capacity from Shannon’s theory, for a signal that is at most EQUAL to the noise, you get that C\leq \nu_m \log_2 (2)=\nu_m=mc^2/h. It depends, of course, upon the validity of quantum mechanics, which, like physical theories in general, is subject to modification should empirical evidence contradicting the theory be found. Sub-quantum theories could violate this bound. And quantum gravity?

Remark: The quantity h/c^2 is the mass equivalent of a quantum of an oscillation of one cycle per second, and c^2/h\approx 1.36\cdot 10^{50} bits/s/kg. The proposition above can be restated as follows: information transmission is limited to frequencies such that the mass equivalent of a quantum of the employed frequency does not exceed the mass of the entire transmission or computing system. Put in a different way: each bit transmitted in one second requires a mass of at least the mass equivalent of a quantum of oscillation of one cycle per second. Interestingly, it seems that the forthcoming new definition of the kilogram is going to use h/c^2 as the new pattern of mass! Bremermann also discusses the Landauer limit (without using that name) in the paper cited above.
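The c^2/h figure can be checked in one line (illustrative helper name):

```python
c = 2.99792458e8    # speed of light, m/s
h = 6.62607015e-34  # Planck constant, J*s

def bremermann_limit(mass_kg):
    """Bremermann bound: maximal processing rate in bits per second."""
    return mass_kg * c**2 / h

# For 1 kg of matter: about 1.36e50 bits per second.
print(f"{bremermann_limit(1.0):.2e} bits/s/kg")
```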

4th bound. Caianiello’s maximal acceleration limit.

Trying to derive quantum mechanics from a phase-spacetime geometry, Caianiello’s main discovery is linked to the so-called maximal acceleration principle, which states that for any mass (energy) there is a maximal acceleration (a maximal gravitational field, if you keep the equivalence principle valid in some way). From a minimal time t_{min}=\hbar/2Mc^2 (a chronon of the type discussed above), the velocity can change by at most c within t_{min}:

    \[a\leq \dfrac{c}{t_{min}}=\dfrac{2Mc^3}{\hbar}\]
and thus

    \[\boxed{a\leq A_M=2\dfrac{Mc^3}{\hbar}}\]

Do you see the connection with 2 (maybe the three?) previous bounds?
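A quick numerical sketch (the helper name is mine, not standard): even for the tiny electron mass, the Caianiello maximal acceleration is enormous.

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
m_e = 9.1093837015e-31  # electron mass, kg

def caianiello_max_acceleration(M):
    """Caianiello maximal acceleration A_M = 2*M*c^3/hbar, in m/s^2."""
    return 2.0 * M * c**3 / hbar

# For an electron: about 4.65e29 m/s^2.
print(f"{caianiello_max_acceleration(m_e):.2e} m/s^2")
```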

5th bound. Bekenstein’s bound.

From the darkest and deepest mysterious black holes, following interesting thermodynamical arguments, J. Bekenstein derived the following bound. For any gravitating system with energy E and size R, the following entropy (information) limit holds:

    \[\boxed{S\leq \dfrac{2\pi k_B ER}{\hbar c}}\]

in joules per kelvin. Equivalently, in bits, you get

    \[\boxed{I\leq \dfrac{2\pi ER}{\hbar c\ln 2}}\]

Using E=Mc^2 and plugging in R in meters and M in kilograms, you get:

    \[\boxed{I\leq \dfrac{2\pi c MR}{\hbar \ln 2}\approx 2.58\cdot 10^{43} MR}\]

Example: For a human brain, with a mass of about 1.5 kg and a volume of 1260 cm^3, assuming a spherical brain form (spherical cow jokes are allowed!), it yields

    \[I\leq 2.6\cdot 10^{42}bits\]

as the maximal information stored in the brain, and it equals the maximal information necessary to mimic an average human brain down to the quantum mechanical level. Assuming the brain is quantum, it should have about 2^I quantum states, or equivalently N\leq 10^{7.8\cdot 10^{41}} states! Note that the Bekenstein bound is SATURATED by black holes (with entropy S_{BH}=k_BA/4L_P^2).
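The brain estimate can be reproduced in a few lines; the spherical radius follows from the quoted volume:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(M, R):
    """Bekenstein bound in bits: I <= 2*pi*c*M*R/(hbar*ln 2)."""
    return 2.0 * math.pi * c * M * R / (hbar * math.log(2))

# Brain example from the text: 1.5 kg, 1260 cm^3, assumed spherical.
V = 1260e-6                                     # volume in m^3
R = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)  # ~0.067 m
print(f"R = {R:.3f} m, I <= {bekenstein_bits(1.5, R):.1e} bits")  # ~2.6e42
```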

The question is: are these 5 bounds independent, or are they only 5 aspects of the same deep principle of the missing quantum gravity? Think about it by yourself! Are they 5, or a single one in disguise?

LOG#197. The 3 laws of the galaxies.


3… Only 3… And 3 laws… 3 laws are everywhere. You do know the 3 laws of newtonian mechanics. You do know (I am sure you do) the 3 laws of robotics. Or the 3 laws of thermodynamics. And you surely know as well the 3 laws of Kepler describing celestial motion. All is three. And you are three? An old Minbari joke, I am sorry… Zathras would say:

This short post is going to tell you the amazing 3 laws of galactic motion, as stated by S. McGaugh, to whom I dedicate this humble post… After all, this version of the 3 laws is due to him.

The 3 laws of the (spiral-like) galaxies

1st. Rotation curves tend towards asymptotic flatness. Mathematically speaking, this law can be stated as follows:

    \[\boxed{\lim_{R\rightarrow\infty} V_r=\mbox{constant} }\]

Remark: No theory so far – just data. No one can deny the flatness of rotation curves, a discovery that should have deserved a Nobel prize for Vera Rubin. Too late!

2nd. Baryonic mass scales as the fourth power of rotation velocity (Baryonic Tully-Fisher).

    \[\boxed{M_b\propto V_f^4}\]

Remark: This law can “always” be interpreted in terms of dark matter (with sufficient fine-tuning). How? It is quite simple. Begin with Newton and Kepler! You should expect in galaxies the same behavior as that seen for planets, shouldn’t you? Then

    \[F_g=F_c\rightarrow V^2=G_N\dfrac{M}{R}\]

For a disk spiral-like galaxy, define the surface density \Sigma=M/R^2. Then, squaring the newtonian law, you get

    \[V^4=G_N^2\dfrac{M^2}{R^2}=G_N^2\Sigma M\]

Split \Sigma=\Sigma_b+\Sigma_{DM}. Since the velocity remains CONSTANT (according to the previous 1st law of the galaxies), and you do know that baryons follow the newtonian (keplerian) motions, we are forced to assume that \Sigma_{DM}>>\Sigma_{b}. This fact is usually taken for granted from different sources, but it is also a follow-up of assuming that dark matter (DM) does exist!

Remark (II): For elliptical galaxies, the (baryonic?) Faber-Jackson relation takes the place of Tully-Fisher’s. It is a relationship between luminosity and velocity dispersion. It can be derived as follows:

i) The gravitational potential energy of any elliptical galaxy

    \[U_g=-\alpha G_N\dfrac{M^2}{R}\approx -G_N\dfrac{M^2}{R}\]

shares a link with the kinetic energy of an elliptical galaxy

    \[E_k=\dfrac{3}{2}M\sigma^2\]

if the velocity is related to the 1D dispersion \sigma via V^2=3\sigma^2, since

    \[E_k=\dfrac{1}{2}MV^2=\dfrac{1}{2}M(3\sigma^2)=\dfrac{3}{2}M\sigma^2\]
ii) Use the virial theorem, 2E_k+U_g=0, to get

    \[3M\sigma^2=G_N\dfrac{M^2}{R}\]

and therefore

    \[M=\dfrac{3\sigma^2 R}{G_N}\]
iii) Assuming M/L\sim constant, a fixed mass-to-light ratio, where L is the luminosity, you get

    \[R\propto \dfrac{LG_N}{\sigma^2}\]

If you also assume that the surface brightness B=L/4\pi R^2 is constant, then

    \[L\propto 4\pi\left(\dfrac{LG_N}{\sigma^2}\right)^2B\]

and therefore, substituting,

    \[L\propto \dfrac{\sigma^4}{4\pi G_N^2B}\]

i.e.,

    \[\boxed{L\propto \sigma^4}\]

Q.E.D. Note that we get the universal Faber-Jackson relation if and only if M/L and the brightness (L\propto IR^2, with I the intensity) are the same for ALL elliptical galaxies. Quite a statement. Also, sometimes people prefer to write L=C\sigma^\gamma, where \gamma is a free parameter close to 4, the “ideal” Faber-Jackson case.
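The scaling L\propto\sigma^4 can be sanity-checked numerically: solve the three relations of the derivation above (virial mass, fixed M/L, constant surface brightness) for L with made-up illustrative constants, and verify that doubling \sigma multiplies L by 16. A sketch (kappa and B are arbitrary placeholders, not fitted values):

```python
import math

G, kappa, B = 6.674e-11, 1.0, 1.0  # gravitational constant; illustrative M/L and brightness

def luminosity(sigma):
    """Solve M = 3*sigma^2*R/G, M = kappa*L, L = 4*pi*B*R^2 for L."""
    # Eliminating M: 3*sigma^2*R/G = kappa*4*pi*B*R^2  =>  R ~ sigma^2
    R = 3.0 * sigma**2 / (4.0 * math.pi * G * kappa * B)
    return 4.0 * math.pi * B * R**2

# Doubling sigma should multiply L by 2**4 = 16, i.e. L ~ sigma^4.
print(luminosity(2.0) / luminosity(1.0))  # 16.0
```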

3rd. Gravitational force correlates with baryonic surface density.

    \[\boxed{-\dfrac{\partial \Phi}{\partial R}\propto \Sigma_b^{1/2}}\]

Indeed, the baryonic surface density correlates with the acceleration. Furthermore, Renzo’s rule states that: “When you see a feature in the light, you see a corresponding feature in the rotation curve.”

Remark: it might stem more naturally from a universal force law.

These 3 laws are completely general for disk galaxies. There is no exception to these laws. There are complementary statements for elliptical galaxies too.

Let me add some stunning comments made by McGaugh in his talks:

  1. The Tully-Fisher relation is not well understood in the context of dark matter. There are many hand-waving models, none of which are completely satisfactory.
  2. One expects, from basic physics, that variations in the distribution of baryonic mass should have an impact on the Tully-Fisher relation (the model lines). They do not (the data).
  3. The residuals from Tully-Fisher are nearly or totally imperceptible, depending weakly on the choice of circular velocity measured. This causes a fine-tuning problem which is generic to any flavor of dark matter… The contributions of the baryonic and dark matter to any given point along the rotation curve must be finely balanced, like a see-saw. As the baryonic contribution increases with baryonic surface density, the dark matter contribution decreases. The two components know intimately about each other… This is the reason behind Renzo’s rule and a deeper fact (Sancisi 1995, private communication, see also Sancisi 2004, IAU 220, 233): “The distribution of mass is coupled to the distribution of light”.

MOdified Newtonian Dynamics (MOND) is a controversial subject for many people. However, from the phenomenological viewpoint, it works “too well”, excepting some critical counter-examples. It could hint at a missing code or regime in our understanding of gravity or kinematics at some (big, infrared in the language of theoretical physicists like me) distances. An idea I have suggested here and one year ago (in 2016, at IARD) is that maybe some new principle for gravity and motion is acting over large scales. With a minimal (maximal) acceleration and a maximal (minimal) length, you get asymptotic laws different from those of the Kepler or Newton cases. Imagine a mass bounded by gravity in which there is some type of background, minimal (maximal) acceleration a_0; Newton’s second law provides:

    \[\dfrac{v^2}{R}=\dfrac{G_NM}{R^2}+a_0\]

and thus

    \[v^2=\dfrac{G_NM}{R}+a_0R\]
If you square it, you get

    \[v^4=\dfrac{G_N^2M^2}{R^2}+a_0^2R^2+2G_NMa_0\]

Neglecting the terms in G_N^2 and a_0^2, you obtain the MONDian law

    \[\boxed{v^4\approx 2G_NMa_0}\]
Remark: You could fit the minimal acceleration to the cosmological constant, via a_0=c^2\sqrt{\Lambda} (the factor c^2 is required for the dimensions of an acceleration). Is there any other physically appealing definition?
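As a quick numerical sketch of the surviving cross term, take v^4\approx G_NMa_0 (the standard deep-MOND relation, i.e. the formula above up to the factor of 2) with an illustrative Milky-Way-like baryonic mass:

```python
# Deep-MOND flat rotation velocity from v^4 = G*M*a0, with an
# illustrative baryonic mass of ~6e10 solar masses (an assumption, not a fit).
G = 6.674e-11     # m^3 kg^-1 s^-2
a0 = 1.2e-10      # MOND acceleration scale, m/s^2
M_sun = 1.989e30  # kg

def mond_flat_velocity(M_baryonic):
    """Asymptotic rotation velocity (m/s) from v^4 = G*M*a0."""
    return (G * M_baryonic * a0) ** 0.25

v = mond_flat_velocity(6e10 * M_sun)
print(f"{v / 1e3:.0f} km/s")  # ~176 km/s, the right ballpark for flat rotation curves
```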

Imagine now that you include a Hooke-law term as well. Of course, it is related to the cosmological constant term (but it has the opposite sign to the normal Hooke law of springs!). You could think of the cosmic hookean term as a certain class of maximal length term. By duality, some type of maximal/minimal tension. Mimicking the above arguments:

    \[\dfrac{v^2}{R}=\dfrac{G_NM}{R^2}+a_0+\Lambda R\]

    \[v^2=\dfrac{G_NM}{R}+a_0R+\Lambda R^2\]

Square it…

    \[\boxed{v^4=\dfrac{G_N^2M^2}{R^2}+a_0^2R^2+\Lambda^2 R^4+2G_NMa_0+2G_NMR\Lambda+2a_0\Lambda R^3}\]

I suspect that in the limit where you neglect the terms in G^2_N, a_0^2 and \Lambda^2, you get a correction to MOND. Interestingly, this correction to MOND consists of two terms proportional to the cosmological constant, which become more and more important as R grows. This generalized MOND should be provided by some quanta of spacetime… The darkons, as they are associated with dark matter and dark energy. Can galactic motion be the first true hint of long-scale features of quantum gravity? It is a possibility I have never seen written in these words. I think so. It is a cunning and striking possibility!

Let me finish with more great S. McGaugh epic words:

Logical possibilities in the battle Dark Matter vs. MOND.

  • ΛCDM is fine; puzzling observations will be explained by complicated feedback processes.
  • MOND gets predictions right because there is something to it — dark matter doesn’t exist.
  • We have no clue what is going on.

Let me point out additional thoughts on this (nonsense?) competition and struggle:

  1. It could be true BOTH that there is “some dark matter particle” and a new kinematics/dynamics behind universal rotation curves.
  2. We need complementary approaches and more experiments to decide.
  3. Quantum gravity (QG) is hidden, but it could happen that the dark stuff is related to it as well. I will not claim that QG is the only cause of flat rotation curves, but it is an additional idea. What if DM consists of “heavy gravitons” modifying our notions of inertia and gravity at long distances? This option cannot be rejected at all!

PLOT HOLE: Not even 3? What about 0th laws? Or 4th or additional laws? Well, of course they can be stated! In fact, S. McGaugh has himself sketched a new law (read his blog entry https://tritonstation.wordpress.com/2016/09/26/the-third-law-of-galactic-rotation/, and references therein), the radial acceleration relation for galaxies. It reads:

    \[\boxed{g_{obs}=\dfrac{g_{bar}}{1-e^{-\sqrt{g_{bar}/g_0}}}}\]

where g_0=a_0\sim 10^{-10}\,m/s^2 is a scale that pervades the galaxies… And MOND. Maybe a QG bat-signal?

May the 3 laws of the galaxies be with you!

P. S.: What about elliptical and other exotic galaxies? Interesting issue indeed. I do not have enough data for a significant answer in THIS post. However, the 3 laws as stated above are UNIVERSAL. Don’t forget it before going into elliptical or exotic galaxies. Dwarf galaxies, however, are important, much more than elliptical galaxies.

P. S. (II): Read Stacy McGaugh and his work here http://astroweb.case.edu/ssm/!

P. S. (III): Atom fans here? No way, Kepler’s laws hold at the atomic level more or less (“keplerian quantum mechanics” played an interesting role in the origin of quantum mechanics itself, via the Bohr-Sommerfeld rules!)

LOG#196. Superbootstrap.

Preface (by the webpage master administrator, Amarashiki): This is the first invited post here, at TSOR. It has been written by Alejandro Rivero, a bright non-standard theoretical physicist out there (you can follow him at https://twitter.com/arivero). It has some outstanding remarks on the (super)bootstrap, compositeness, supersymmetry (SUSY) and the Standard Model (SM), and it requires only knowledge on group theory and particle physics (the Standard Model, of course) to follow it.


In the picture, Baron M. gets himself out of the swampland by pulling on his own (and his horse’s) pigtail.

0. Introduction.

Bootstrapping is the idea of fixing some of the parameters of a model by asking for consistency in some self-referential way. It was popular in the sixties, associated with the concept of “nuclear democracy”: that all the particles were equally elementary, or composed of each other, depending on the channel where the interaction happened. In some sense, the theory raises itself by pulling on its own bootstraps.

Supersymmetry (SUSY) is the idea of allowing a symmetry that transforms bosons into fermions and vice versa; one of its consequences is that for each spin-1/2 particle the model must contain two scalars of the same charge. Such “superpartners” were expected at the LHC, but they have not appeared (yet?), casting shadows of doubt on the usefulness of the idea.

It was never hoped that the bootstrap would be able to fix the multiplicity and quantum numbers of the particles in a model; nowadays this task is partly done by another consistency requirement, anomaly cancellation. But it happens that when combining the ideas of self-reference and supersymmetry, the selection of models narrows considerably.

1. The view from above

Consider SO(32). We can factor it as flavour times colour, G^F \times SU(3) (following Gell-Mann 1976, case 4, equation 2.18; see http://inspirehep.net/record/112502)

    \begin{eqnarray*} (32 \times 32)_A= [1,1,1^c] + {\bf (1,24,1^c) }+ (1,1,1^c) +(1,1,8^c) + (2,5,3^c) + (2,\bar 5, \bar 3^c) \\ +{\bf [1,15,\bar3^c]} + {\bf [1, \bar {15}, 3^c]} +[1,10,6^c] + [1, \bar 10, \bar 6^c] \end{eqnarray*}

where G^F=SO(2) \times U(5) \times U(1). Or, if you prefer, branch down

    \[SO(32) \supset SO(30) \times U(1) \supset SU(15) \times U(1) \times U(1) \supset SU(5) \times SU(3) \times U(1) \times U(1)\]

to get

    \begin{eqnarray*} {\bf 496 }= (24, 8^c)_{0,0} \oplus (1,1^c)_{0,0} \oplus {\bf (24, 1^c)_{0,0}} \oplus (1,1^c)_{0,0} \\ \oplus (1, 8^c)_{0,0} \oplus (5, 3^c)_{2,2} \oplus (5, 3^c)_{2,-2} \oplus (5, 3^c)_{-2,2} \oplus (5, 3^c)_{-2,-2}\\ \oplus {\bf (15, 3^c)_{0,4} \oplus (15, 3^c)_{0,-4}} \oplus (10, 6^c)_{0,4} \oplus (10, 6^c)_{0,-4} \end{eqnarray*}

Our interest, as we will explain in a moment, is in the 15 and 24 multiplets. Break them further via SU(5) \supset SU(3)\times SU(2) \times U_Y(1). Then the \bf 24 descends to

    \[24 = (1, 1)_0 + (1, 3)_0 + (3, 2)_5 + (\bar 3, 2)_{-5} + (8, 1)_0\]

and with Q=\frac 15 Y it looks like three generations of scalar leptons. It starts to be interesting, do you agree? Furthermore, the \bf 15 descends as

    \[15 = (1, 3)_{-6} + (3, 2)_{-1} + (6, 1)_4\]

and then, extending the assignment of the U(1) charges, the total \bf 15+\bar {15}+24 can be interpreted as three generations of scalar quarks and leptons:

    \[\begin{array}{l|c|r|r|c} irrep & N & Y_1 & Y_2 & Q= \frac 1{30}Y_1 - \frac15 Y_2 \\ \hline (6,1) & 6 & 4 & 4 & -2/3 \\ (3,2) & 6 & 4 & -1 & +1/3 \\ (1,3) & 3 & 4 & -6 & +4/3 \\ (\bar 6,1) & 6 & -4 & -4 & +2/3 \\ (\bar 3,2) & 6 & -4 & 1 & -1/3 \\ (1,\bar 3) & 3 & -4 & 6 & -4/3 \\ (\bar 3,2) & 6 & 0 & -5 & +1 \\ (3,2) & 6 & 0 & 5 & -1 \\ (1,1) & & & & \\ (1,3) & 12 & 0 & 0 & 0 \\ (8,1) & & & & \\ \end{array}\]

The table is interpreted as three generations of scalars with the correct electric charge, plus three “half-generations” of a +4/3 object. It could be interesting to set the Q of this extra (1,3) to zero too, but in order to do this we would need to grant some baryon number to (6,1) and (3,2). This is possibly the most straightforward way to obtain three generations of standard model colour and electric charge out of a string-motivated group. I have never seen it quoted in the literature, but I have never been very conversant with the string literature. From the GUT point of view, this is also left unnoticed because one wants to get the chiral charge as well, and then one looks for spinors in complex representations by exploring the groups SO(4n+2), which excludes SO(32).
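The electric-charge column of the table can be verified with exact rational arithmetic, since each entry follows from Q=\frac 1{30}Y_1-\frac 15 Y_2:

```python
from fractions import Fraction as F

# Check of the electric-charge column in the table above:
# Q = Y1/30 - Y2/5 for each irrep's hypercharge pair (Y1, Y2).
def charge(Y1, Y2):
    return F(Y1, 30) - F(Y2, 5)

assert charge(4, 4) == F(-2, 3)   # (6,1)      -> -2/3
assert charge(4, -1) == F(1, 3)   # (3,2)      -> +1/3
assert charge(4, -6) == F(4, 3)   # (1,3)      -> +4/3
assert charge(0, -5) == F(1, 1)   # (3bar,2)   -> +1
print("all charges match the table")
```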

Isolated, 15+\bar{15}+24 can be considered as a 54 of SO(10), or a 55 in Sp(10) or O(10). To propose an identification for the components of the 10 of SO(10), we can also branch the 10-plet of the SO(10) group first to SU(5), with 10 = 5_2 + \bar 5_{-2}, and then to SU(3) \times SU(2), via 5 = (3, 1)_2 + (1,2)_{-3}. We see that the 10-plet has three elements with Q=\frac 2{30} - \frac 2{5} = -\frac 13, two elements with Q=\frac 2{30} + \frac 3{5} = \frac 23, and then the corresponding opposite charges. Abusing a bit of traditional notation (or perhaps not!), we can call the elements of the triplet d, s, b and those of the doublet u, c. It can be said that the bosons are actually being generated by pairing “quarks” or “antiquarks”.

2. The view from below.

To upgrade the above view to something similar to the standard model, we assume that there exists a supersymmetric theory containing this same set of scalar states, and that there is a nuclear democracy in the labeling: the “Chan-Paton charges” of this theory are a subset of the quarks. We can call them the light quarks or, indistinctly, the preons of the theory. The model itself can have more quarks, even more generations, but only the “light ones” can bind into scalar quarks and scalar leptons. So, consider r quarks of +2/3 charge and s quarks of charge -1/3. Our postulate is that they combine pairwise (at the extremes of an open string) to form consistent generations of squarks and sleptons. We can formulate this requisite in two ways:

  1. We can ask that the combination must build the same number of “up type” and “down type” diquarks. That is,

        \[rs = s(s+1)/2 = 2n\]

    and so s = 2r-1. Plus, we can ask r to be even, to make sure we build an even number of scalars of each charge (we want to be able to promote them to a supersymmetric theory in the future).

  2. Or, we can go for a more general condition, which we will see implies the former one: we can ask that the total number of combinations (diquarks, antidiquarks, and quark-antiquark pairs minus the singlet) must be an integer multiple of a single set, that is, a multiple of rs. In this case we look for positive integer solutions of:

        \[2rs + (s^2+s) + (r^2 + r) + (r+s)^2 -1 = K rs\]

    and then K=9. This in turn implies that one generation is composed of (K-1)/2 Dirac-like tuples (including the neutrino), plus one extra state, with its antiparticle. This extra state can be either neutral or charged with +4/3.


In any case, either requirement fixes s = 2r-1, and then the allowed groups are SO(2s+2r)=SO(6r-2), with r even. The smallest possible group is, as expected, SO(10), but bigger groups are possible. What we need now is an extra postulate to fix n_g=3. The simplest extra postulate is to ask for the absence of excess neutral bosons. With this extra requisite, the solution is unique:

    \[r=2, s=3, n_g=3\]

and G=SO(10) or similar.

Now, we can try to add SU(3) colour, and on this path we scale up the group size. We notice that a full representation of O(10) \times U(3) would have size 55 \times 9 = 495, and then we suspect we can get the needed representations by breaking some group of about this number of components. Immediate candidates are SO(32) and E_8\times E_8. We have got a good match to the non-chiral quantum numbers of the standard model using the former; it is work in progress to see whether, using the latter, we get some match to the chiral numbers.
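The first counting condition can be brute-forced in a few lines, confirming that every solution has s = 2r-1 and hence yields a group SO(2s+2r) = SO(6r-2):

```python
# Brute-force sketch of requirement 1: rs = s(s+1)/2, i.e. equal
# numbers of "up type" and "down type" diquarks.
solutions = []
for r in range(1, 20):
    for s in range(1, 40):
        if r * s == s * (s + 1) // 2:
            solutions.append((r, s))

# Every solution satisfies s = 2r - 1, so the allowed groups are SO(6r-2).
assert all(s == 2 * r - 1 for r, s in solutions)
for r, s in solutions[:4]:
    print(f"r={r}, s={s} -> SO({6 * r - 2})")
# r=2, s=3 is the smallest case with r even, giving SO(10).
```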

3. The global view.

A peculiar thing is that the bosons of this theory appear in equal number to the different kinds of mesons known in experimental spectroscopy. This is, of course, because the top quark does not hadronize. On the other hand, the diquarks here are not the “good” diquarks but the bad ones; they have been paired with the adequate colour during the branching. An interesting idea is that we are actually seeing, as mesons and diquarks, the remnants of the scalars of the supersymmetric standard model.

Remnants or hidden, what we are saying is that the scalars are composites of the quarks. So we have a theory of preons, but without preons. But of course, we could ask what the result of applying the SUSY transformation operator to such scalars is. Is it a composite fermion? I do not know, but if it is, it is a composite containing quarks. The standard model particles bootstrap themselves, as composites of themselves.

LOG#195. Quantum theory of photon spin states.

This blog post is dedicated to my friend, Sergio Lukic. He wrote it as an undergraduate student, and I got his permission to publish it here, on my blog.

This article presents a treatment of photon polarization using the quantum features of photon states. Photons are massless particles with spin one in natural units. Using the formalism and framework of quantum statistics, we will derive well-known mathematical relationships.

In modern particle physics, we understand the electromagnetic field not as a continuum, but as a set of discrete particles transmitting electromagnetic forces, which we name photons. This idea, in fact, can be generalized to any Yang-Mills field with a semi-simple internal symmetry group. For instance, for SU(3) we build a Yang-Mills field whose mediator particle is the gluon. Quantizing the theory of quarks and gluons, you get the theory known as QCD (Quantum Chromodynamics), where in addition to gauge forces you also have matter fields called quarks.

Going back to the electromagnetic field, this field of photons is described by a 4-vector potential A_\mu, when we define the target space as the usual 4D=3+1 spacetime. This 4-vector is called the gauge potential, and the electromagnetic field is defined by the gauge field modulo gauge transformations, A'=A+d\varphi. Then, we have 3 real degrees of freedom: for instance, choosing A_0=0, there are only 3 independent components for the electromagnetic field. Photon states would apparently have spin -1, 0, +1. But we also know, as we said above, that photons are massless particles, moving at the speed of light. Therefore, we can delete ONE additional degree of freedom, and we are left with only 2 independent spin states. In this way, the polarization state is fully described by a ray in some Hilbert space of dimension two. Usually, optical physicists call this space the Jones vector space. We are going to use the framework of quantum mechanics to describe it. This state space can be thought of as having an orthogonal basis \vert +\rangle, \vert -\rangle, where, as usual, we denote by \vert +\rangle the circular polarization state with positive helicity (left-handed), and by \vert -\rangle the circular polarization state with negative helicity (right-handed). Now, using the standard rules of Quantum Mechanics (QM), any pure polarization state can be associated with a ray in Hilbert space given by the complex linear superposition

(1)   \begin{equation*} \boxed{\vert \Psi \rangle=z_1\vert +\rangle +z_2\vert -\rangle} \end{equation*}

These states are defined up to a constant by the quotient z_1/z_2. Mathematically, we have passed from the complex space \mathbb{C}^2 to the projective complex space \mathbb{CP}^1, using the projective coordinate z_2/z_1. This projective plane is the extended complex plane \mathbb{C}\cup\{\infty\}, and it is generally represented by a sphere called the Riemann sphere (alternatively, it has equivalent names), projecting the plane stereographically onto that sphere, as the next figure shows:

This representation of spin states on the Riemann sphere was proposed for the first time by the genius Ettore Majorana in 1932 (Atomi orientati in campo magnetico variabile, Nuovo Cimento, 9, 43-50). It is important to highlight the deep link of the Majorana description with the one given by Poincaré in 1892 (Théorie Mathématique de la Lumière, vol. 2, Paris, Georges Carré, 1892). In fact, different physical paradigms (or physmatics) often arrive at the same conclusions, in this case the same geometrical construction. One of the wonderful aspects of the Majorana description is that if you write q=z_2/z_1, it represents the polarization state of a photon or a coherent photon ray (in superposition!). Moreover, p=\sqrt{q} specifies a vector on the Riemann sphere, so that the intersection of the plane perpendicular to that vector with the sphere gives us a circumference that, projected onto the complex plane, defines the polarization ellipse, modulo a dilatation factor, as follows

The orientation is positive if p lives in the Northern hemisphere, and negative if it lives in the Southern hemisphere. Such a convention can be remembered with the conventional right-hand rule you know from electromagnetism courses. Now, introducing spherical coordinates 0\leq \theta\leq \pi and 0\leq \psi\leq 2\pi on the Riemann sphere, we can parametrize the physical space of photon spin states in the next way:

(2)   \begin{equation*} \boxed{\vert \Psi\rangle=\cos (\theta/2)\vert +\rangle+e^{i\psi}\sin (\theta /2)\vert -\rangle} \end{equation*}

where we have normalized the state to 1, and we have fixed the relative phase \psi of any state \vert \Psi\rangle in \mathbb{C}. Therefore, we can find the point (\theta,\psi) corresponding to any state. This comes with benefits: with a quantum description of the polarization state, we can use density matrices to describe ANY mixed state of polarization, i.e., we get a full description of the polarization state of a photon ray even in an incoherent superposition. You can compare this with the description given in the book Quantum Mechanics (3rd ed., Wiley and Sons, E. Merzbacher, 1998), where the formalism of density matrices we are going to use here is studied. As a first example, we are going to calculate the density matrix describing pure states in our basis \vert +\rangle, \vert -\rangle. Any density matrix, in Dirac notation, reads:

(3)   \begin{equation*} \boxed{\rho=\sum_i\vert \Psi_i\rangle p_i\langle \Psi_i\vert} \end{equation*}

and where tr(\rho)=\sum_i p_i=1. For the pure state given above, the sum reduces to a single term, and the density matrix (check it yourself) is given by

(4)   \begin{equation*} \boxed{\rho=\rho (\theta, \psi)=\begin{pmatrix}\cos^2 (\theta/2) & \dfrac{1}{2}e^{-i\psi}\sin (\theta)\\ \dfrac{1}{2}e^{i\psi}\sin (\theta) & \sin^2 (\theta/2)\end{pmatrix}} \end{equation*}
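As a sanity check, the matrix in Eq. (4) is indeed a normalized pure-state projector, as a few lines of numpy confirm:

```python
import numpy as np

# Check that rho(theta, psi) from Eq. (4) has trace 1 and satisfies
# rho^2 = rho, i.e. it is the projector onto a pure polarization state.
def rho(theta, psi):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c**2, 0.5 * np.exp(-1j * psi) * np.sin(theta)],
                     [0.5 * np.exp(1j * psi) * np.sin(theta), s**2]])

R = rho(0.7, 1.3)  # arbitrary point (theta, psi) on the Riemann sphere
print(np.isclose(np.trace(R).real, 1.0))  # True
print(np.allclose(R @ R, R))              # True: pure-state projector
```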

The second example is to use this formalism to get the polarization state of natural light. As every pure state is equally probable in the mixture, the sum over states becomes an integral over the Riemann sphere of all the pure polarization states \rho (\theta, \psi) given above. Using the usual measure on the sphere to perform the integral, you obtain:

(5)   \begin{equation*} \rho=\kappa \int_0^{2\pi}\int_0^\pi \begin{pmatrix}\cos^2 (\theta/2) & \dfrac{1}{2}e^{-i\psi}\sin (\theta)\\ \dfrac{1}{2}e^{i\psi}\sin (\theta) & \sin^2 (\theta/2)\end{pmatrix}\sin\theta d\theta d\psi=\kappa \begin{pmatrix}2\pi & 0\\ 0 & 2\pi\end{pmatrix} \end{equation*}

To normalize, you have only to impose tr (\rho)=1, so \kappa =1/4\pi, and finally:

(6)   \begin{equation*} \rho=\begin{pmatrix} 1/2 & 0\\ 0 & 1/2\end{pmatrix} \end{equation*}

So, the density matrix for natural polarization is proportional to the unit matrix. Moreover, this polarization state agrees with experience and with the maximal entropy expectation: since it is the most probable state, it has the biggest (maximum) entropy among polarization states, i.e., S=-k_B tr(\rho\ln\rho)=k_B \ln 2. It can also be derived from the condition:

    \[\dfrac{dS}{dp}=\dfrac{d}{dp}\left(-k_B\left[p\ln p+(1-p)\ln (1-p)\right]\right)=0\]

and where p and 1-p are the eigenvalues of the density matrix \rho. Imposing this condition, the only solution is p=1/2, and it is a maximum since S(1/2)=k_B\ln 2> S(1)=S(0)=0, as you can check yourself.
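The entropy maximization can also be checked numerically over the eigenvalue p (working in units of k_B):

```python
import numpy as np

# Maximize S(p) = -[p ln p + (1-p) ln(1-p)] (in units of k_B) over the
# eigenvalue p of the density matrix: the maximum sits at p = 1/2,
# where S = ln 2, matching the natural-light density matrix.
def S(p):
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

p = np.linspace(1e-6, 1 - 1e-6, 100001)
p_max = p[np.argmax(S(p))]
print(p_max, S(p_max) / np.log(2))  # ~0.5 and ~1.0 (i.e. S = k_B ln 2)
```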

Observables in this density matrix formalism are polarization measurements, since \rho=P is a Hermitian operator, P^+=P, associated to polarization measurements with eigenvectors

    \[\vert 1\rangle=\cos (\theta/2)\vert +\rangle+e^{i\psi}\sin (\theta/2)\vert -\rangle\]

    \[\vert 0\rangle=\sin (\theta/2)\vert +\rangle-e^{i\psi}\cos (\theta/2)\vert -\rangle\]

Polarization operators, a.k.a. density matrices in this context, are projectors (P^2=P), and they evolve states non-unitarily unless those states are eigenstates, since

    \[\vert \Psi\rangle \rightarrow P\vert\Psi\rangle\]

The evolution of any polarization state after passing through a polarizator P will be

    \[\rho_f=P\rho_iP^{+}/tr(P\rho_i P^+)\]

and, as you can prove yourself,

    \[tr(P\rho_iP^+)=\sum_i p_i\vert\langle \varphi_p\vert \Psi_i\rangle\vert^2\]

with P=\vert\varphi_p\rangle\langle\varphi_p\vert. The intensity will be

    \[I_f=I_itr(P\rho_i P^+)\]

and you can iterate the process for each polarizator in the chain. After n polarizators, with tr(\rho_i)=1, the emergent ray will become a ray with

    \[\rho_f=P_nP_{n-1}\cdots P_1\rho_i P_1^+P_2^+\cdots P_n^+/tr(P_n\cdots P_1\rho_i P_1^+\cdots P_n^+)\]

This class of non-unitary density matrix evolution is called reduction of the state vector. We are interested, in principle, in linear polarizators, \theta=\pi/2. Then,

(7)   \begin{equation*} \rho=P=\begin{pmatrix} 1/2 & e^{-i\psi}/2\\ e^{i\psi}/2 & 1/2\end{pmatrix} \end{equation*}

Let me show how the polarization state of monochromatic natural light evolves when it passes through a linear polarizator.

    \[\rho_i=\mathbf{1}/2\rightarrow \rho_f=P\rho_i P^+=P^2/2=P/2\]

    \[tr(P\rho_iP^+)=\dfrac{1}{2}tr (P^2)=\dfrac{1}{2}\]


Therefore, we get a state of linear polarization with half of the intensity of the incident light-ray. This is, of course, a well-known result from the classical theory of light and its polarization. As practical examples, we will derive the Malus law for linearly polarized states and we will study the effect of a quarter-wave plate on polarization states.
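Before the examples, a playful numerical aside (a Python sketch of the reduction formula above; the chain of gradually rotated polarizators is my own illustrative example, not from the text): sending natural light through a chain of polarizators whose axes rotate from 0 to 90 degrees transmits a finite intensity, even though a single crossed pair would block everything.

```python
import numpy as np

def P(psi):
    """Linear polarizator projector in the |+>, |-> basis (eq. (7))."""
    return 0.5 * np.array([[1, np.exp(-1j * psi)],
                           [np.exp(1j * psi), 1]])

def reduce_state(rho, proj):
    """Non-unitary reduction rho -> P rho P+ / tr(P rho P+); returns (rho_f, transmission)."""
    out = proj @ rho @ proj.conj().T
    t = np.trace(out).real
    return out / t, t

# Natural light through a chain of polarizators rotating from 0 to 90 degrees:
# each step only costs cos^2(pi/(2N)), so more polarizators transmit MORE light.
for N in (2, 5, 50):
    rho = 0.5 * np.eye(2)          # natural light
    intensity = 1.0
    for k in range(N + 1):
        psi = np.pi * k / N        # psi = 2*phi, physical angle phi from 0 to pi/2
        rho, t = reduce_state(rho, P(psi))
        intensity *= t
    print(N, round(intensity, 4))  # e.g. N=2 -> 0.125; grows towards 0.5 as N grows
```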

Example. Malus law. Let me begin with

(8)   \begin{equation*} \rho=\begin{pmatrix} 1/2 & e^{-i\psi}/2\\ e^{i\psi}/2 & 1/2\end{pmatrix} \end{equation*}

and a linear polarizator with observable

(9)   \begin{equation*} P=\begin{pmatrix} 1/2 & e^{-i\psi'}/2\\ e^{i\psi'}/2 & 1/2\end{pmatrix} \end{equation*}

The intensity after the light passes the polarizator is

    \[I_f=I_i tr(P\rho P^+)=\dfrac{1}{8}tr\left[ \begin{pmatrix} 1 & e^{-i\psi'}\\ e^{i\psi'} & 1\end{pmatrix}\begin{pmatrix} 1 & e^{-i\psi}\\ e^{i\psi} & 1\end{pmatrix}\begin{pmatrix} 1 & e^{-i\psi'}\\ e^{i\psi'} & 1\end{pmatrix}\right]\]

and then

    \[\boxed{I_f=\dfrac{I_i}{2}\left(1+\cos (\psi-\psi')\right)}\]

So this is the Malus law from density matrices! It is important to realize that \psi , \psi' have a very special relationship with the Riemann sphere picture: (z_2/z_1)^{1/2} is associated to the spatial representation of the polarization state that the complex number z_2/z_1 represents. In our case, p=\exp (i\psi /2) and p'=\exp (i\psi'/2) lie on the equator of the sphere, denoting respectively a normal vector to the plane defining the linear polarization of \rho and a normal vector to the plane defining the polarization axis of our polarization device P. The relative angle \varphi between both axes, experimentally accessible, becomes \varphi=\psi/2-\psi'/2. Now, we can rewrite the Malus law in its usual formulation, as follows:

    \[\boxed{I_f=\dfrac{I_i}{2}\left(1+\cos (2\varphi)\right)=I_i\cos^2\varphi}\]
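A quick numerical check of this result (Python sketch; lin_pol is just eq. (7) with an arbitrary phase \psi):

```python
import numpy as np

def lin_pol(psi):
    """Linear polarizator projector in the |+>, |-> basis, eq. (7)."""
    return 0.5 * np.array([[1, np.exp(-1j * psi)],
                           [np.exp(1j * psi), 1]])

rng = np.random.default_rng(0)
for _ in range(5):
    psi, psi2 = rng.uniform(0, 2 * np.pi, size=2)
    rho, P = lin_pol(psi), lin_pol(psi2)
    t = np.trace(P @ rho @ P.conj().T).real        # transmitted fraction
    malus = 0.5 * (1 + np.cos(psi - psi2))         # = cos^2(varphi), varphi = (psi-psi2)/2
    assert np.isclose(t, malus)
print("Malus law from density matrices checked")
```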

Remark: I have been using “polarizator” as a terminological misnomer. The usual name is polarizer!

Example 2. Quarter-wave plate. Let me firstly review the effect of a \lambda/4 plate on polarization states, based on experimental results:

  1. \lambda/4 plates have I_i=I_f, i.e., unit transmittance.
  2. \lambda/4 plates do not alter linear polarization states with parallel or perpendicular planes to the optical axis of the plate.
  3. \lambda/4 plates induce a phase \alpha to polarization states, typically \alpha=\pi/2. Then, linear polarization with +(-) \pi/4 plane with respect to the optical axis becomes circularly polarized with +(-) if \alpha=\pi/2 or -(+) if \alpha=-\pi/2.

We assign an evolution operator U to every \lambda/4 plate, so every state evolves

    \[\vert\Psi\rangle\rightarrow U\vert \Psi\rangle\]

    \[\rho\rightarrow U\rho U^+\]

for pure and mixed states, respectively. We have tr (\rho)=tr (U\rho U^+)\;\forall\rho, which holds for unitary U. Fixing the overall phase so that \det (U)=1, we can write

    \[U=\begin{pmatrix} a & b\\c & d\end{pmatrix}\]

with det (U)=ad-bc=1. For the state

    \[\vert \Psi\rangle=z_1\vert +\rangle+z_2\vert -\rangle\]

    \[U\vert \Psi\rangle=(az_1+bz_2)\vert +\rangle+(cz_1+dz_2)\vert -\rangle\sim \vert +\rangle+ f(q)\vert -\rangle\]

where q=z_2/z_1, and f(q)=(c+dq)/(a+bq). The three experimental conditions above (unit transmittance, invariance of the states parallel and perpendicular to the optical axis, and the induced phase \alpha=\pi/2) fix the coefficients a, b, c, d up to the phase \psi' of the optical axis, and finally, for the \lambda/4 plate, you get that

    \[U=\dfrac{1}{\sqrt{2}}\begin{pmatrix} e^{i\psi'}& i\\i & e^{-i\psi'}\end{pmatrix}\]

and you can check that UU^+=1, so U is a unitary operator in SU(2). In the Riemann sphere representation, q\rightarrow f(q) is a rotation with angle \pi/2 whose invariant states are the phases e^{i\psi'},-e^{i\psi'}. In summary, and generalizing all this stuff: \lambda/4 plates introduce phases \alpha onto the polarization states, they are associated to unitary operators U in the group SU(2), and they act on the Riemann sphere as rotations with angle \alpha! Remarkably, this is very different from the stochastic evolution of polarization observables at the microscopic level. However, both operators are examples of the evolution of quantum states. The combined action of both kinds of evolution can also happen. Imagine the following experiment: firstly, natural light passes through a linear polarizer; then it passes a \lambda/4 plate; after that, it passes again through another linear polarizer. The action can be fully defined by operators P_1, U, P_2. This experiment is usually done to get circularly polarized light, and the second polarizer acts as an analyzer. Step by step, you get:

Step 1. \rho=\mathbf{1}/2, I=I_i. Natural light.

Step 2. Linear polarizer acts. \rho=P_1, I=I_i/2.

Step 3. \lambda/4 plate. \rho=UP_1U^+. I=I_itr(UP_1U^+)/2=I_i/2.

Step 4. \rho=P_2UP_1U^+P_2/tr(P_2UP_1U^+P_2). I=I_itr(P_2UP_1U^+P_2)/2.

Using spherical coordinates, the final result can be written as

    \[I_f=\left(1+\sin (\theta)\cos (2\varphi)\right)I'/2\]

where I'=I_i/2 is the intensity after the first polarizer.

You can get different final polarization states, depending on \theta, e.g., linear polarization (\theta=\pi/2), elliptical polarization (\theta=\pi/4), and circular polarization (\theta=0,\pi).
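The whole chain can be verified numerically. A Python sketch (the plate axis fixed at \psi'=0 is an illustrative choice):

```python
import numpy as np

def lin_pol(psi):
    """Linear polarizator projector in the |+>, |-> basis, eq. (7)."""
    return 0.5 * np.array([[1, np.exp(-1j * psi)],
                           [np.exp(1j * psi), 1]])

U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])        # lambda/4 plate with psi' = 0
assert np.allclose(U @ U.conj().T, np.eye(2))     # unitary: unit transmittance

# Steps 1-2: natural light through a linear polarizator at 45 deg to the plate axis
rho = lin_pol(np.pi / 2)                          # psi = 2 * (pi/4)
# Step 3: the plate acts unitarily
rho = U @ rho @ U.conj().T
print(np.round(rho.real, 6))                      # diag(0, 1): circular polarization

# Step 4: any linear analyzer then transmits one half, as it must for circular light
for psi2 in (0.0, 1.0, 2.5):
    t = np.trace(lin_pol(psi2) @ rho @ lin_pol(psi2).conj().T).real
    print(round(t, 6))                            # -> 0.5
```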

See you in the next blog post!

LOG#194. On minimons, maximons and darkons.

If you believe in both relativity AND quantum mechanics, something that I presume you do due to their experimental support, you are driven to admit that the speed of light is the maximal velocity (at least, the maximal 1-speed, in single-time theories), and that the quantum of action is bounded below by \hbar. The quantum of action also implies that any EXTENDED structure with size \sim \lambda can not exceed, not only the speed of light, but also a certain acceleration! Usually, this is not remarked or stressed, but it is true, unless you presume certain classes of fuzziness and non-locality. If

    \[v\leq c\rightarrow v^2\leq c^2\rightarrow \dfrac{v^2}{\lambda}\leq\dfrac{c^2}{\lambda}\]

and from simple dimensional analysis, you get

    \[a_c\leq \dfrac{c^2}{\lambda}\]

And, if you plug \lambda=\lambda_C=\hbar/m c you get

    \[a_c\leq \dfrac{mc^3}{\hbar}\]
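To get a feel for the scale of this bound, a numerical sketch (SI constants hardcoded with approximate CODATA values; that choice, and the electron as the test particle, are mine):

```python
# Order-of-magnitude sketch of the maximal acceleration bound a <= m c^3 / hbar.
hbar = 1.054571817e-34      # J s
c = 2.99792458e8            # m/s
m_e = 9.1093837015e-31      # kg (electron)

a_max = m_e * c**3 / hbar   # maximal acceleration bound for an electron
print(f"a_max(electron) ~ {a_max:.3e} m/s^2")   # ~ 2.3e29 m/s^2
```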

This is the simplest argument I know to propose a maximal acceleration principle from the combined force of special relativity AND quantum mechanics. There are other arguments. If you followed my posts about Bohrlogy and fundamental challenges and issues in physmatics, you do know I believe there is a deep link we are missing in the search for quantum gravity and the ultimate theory. We are lacking a guiding principle. I am not claiming that maximal acceleration IS the principle, but only that it is pointing towards it. And not too many people know this!

I have often highlighted too that quantum theory is not only a theory of numbers. It is also a theory of the quantization of everything (via quanta of action!). Everything is the quantum of something. That is why number theory will matter in the final ultimate theory. Nature LOVES counting!  If you admit that there is a well defined vacuum, and that there is a lower bound to the energy, plus a higher limit on the number of microstates (atoms) of space-time in any volume (or more precisely, hypersurface, according to the holographic principle), you must agree that there is a minimum quantum of everything (the minimon, the “vacuum”) and a maximal quantum that saturates the phase space-time, the maximon. Firstly, consider the following example. The Schwarzschild radius of general relativity also has a mysterious duality and connections to a minimal length. Let me write:

    \[R_S=\dfrac{2G_NM}{c^2}=\dfrac{2L_p^2 c M}{\hbar}=\dfrac{2L_p^2}{\lambda_C}\]

where we have used L_p^2=G\hbar/c^3 as the definition of the Planck length, and \lambda_C=\hbar/Mc. Would you say the temporal part of the Schwarzschild metric scales like 1/r or 1/r^2? Suppose you make the mass and r not constants but fluctuating quantities, due to the inclusion of quantum mechanics and the Heisenberg Uncertainty Principle (HUP). Then, \Delta E\Delta r\geq \hbar c/2. Now, saturating the bound, \Delta E=\hbar c/2\Delta r. Then, from

    \[\dfrac{2L_p^2}{\lambda_C r}\rightarrow 2\dfrac{G_NM}{rc^2}\rightarrow 2\dfrac{G_N\Delta M}{\Delta r c^2}=2\dfrac{L_p^2 c \Delta E}{\hbar \Delta r c^2}\]

and from the above argument

    \[2\dfrac{L_p^2 c \Delta E}{\hbar\Delta r c^2}=\dfrac{2L_p^2 \hbar  c^2}{2c^2\hbar \Delta r^2}=\dfrac{L_p^2}{\Delta r^2}\]

Gravity plus quantum mechanics implies not only a minimal length; it also changes the way in which the metric changes. Fluctuations of the metric are sensitive to squares of the Planck length. Note that you could be tempted to write g_{tt,q}\sim 1/r, but it seems much more natural to work with the square. You could be puzzled like me, but some time ago, Jacob Bekenstein suggested something like this: area quantization for the black hole (quantum!) spectrum. If you know loop quantum gravity, you also know that the area operator is much more natural than the volume or length operators. So, somehow, areas, surfaces or their general ND generalizations, the hypersurfaces, are much more natural objects for quantum gravity than “points”, aren’t they? Who knows? Braners and stringers also know this, I am sure of it.

On the other hand, Caianiello’s papers about maximal acceleration had two big goals:

  1. Quantization as geometry in phase space-time.
  2. Quantum mechanics as geometry in some curved phase space-times.

Maximal acceleration can also be related to the Sakharov limiting temperature, now more frequently quoted as the Hagedorn temperature. The symmetry you could get from any extended relativity theory with both a maximal 1-speed and a maximal 1-acceleration is very suggestive.
One comment is necessary with respect to maximal acceleration and the above arguments. \lambda is not, indeed, a fundamental length, so different forces yield, in principle, different characteristic \lambda and different fundamental lengths! You can see this as an issue. But remember that not every particle travels at the speed of light. Only massless bosons do! With respect to maximal acceleration, you can write:

    \[a_M=\dfrac{\mu c^2}{m\lambda}=\dfrac{c\hbar}{2m\lambda^2}=\dfrac{2\mu^2 c^3}{m\hbar}\]

where a new dual mass \mu beyond m has been introduced, so that

    \[\lambda \mu c=\dfrac{\hbar}{2}\]

Now, we are going to derive the Sakharov maximal temperature (critical Hagedorn temperature) from the maximal acceleration principle. Take \mu=m and calculate the maximal acceleration in the center of mass system. Convince yourself that

    \[a_M=\dfrac{2mc^3}{\hbar}\]

Equate this to the gravitational force

    \[F=\dfrac{G_Nm^2}{R^2}\]

but note that in the center of mass you get ma/2=G_Nm^2/R^2. From

    \[\dfrac{m}{2}\dfrac{2mc^3}{\hbar}=\dfrac{G_Nm^2}{R^2}\]

in the center of mass frame, you get

    \[R^2=\dfrac{G_N\hbar}{c^3}=L_p^2\]

so R^2\sim \lambda^2\sim L_p^2, and the maximal acceleration is relevant at Planck scales. Via the Unruh effect:

    \[T_U=\dfrac{\hbar a}{2\pi c k_B}\]

any maximal limit to the acceleration implies a maximal temperature, as Sakharov suggested. Indeed:

    \[T_U=T_M=\dfrac{\hbar}{2\pi k_B c}a_M=\dfrac{\hbar}{2\pi k_B c}\dfrac{c^2}{\lambda}=\dfrac{1}{2\pi k_B}\sqrt{\dfrac{\hbar c^5}{G_N}}\]
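Numerically (a Python sketch with approximate SI constants hardcoded), this maximal temperature is just the Planck temperature over 2\pi:

```python
import math

# Sakharov/Hagedorn-like maximal temperature from the maximal acceleration bound.
hbar, c, G, k_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23

T_P = math.sqrt(hbar * c**5 / G) / k_B      # Planck temperature
T_M = T_P / (2 * math.pi)                   # maximal temperature from the text
print(f"T_Planck ~ {T_P:.3e} K, T_max ~ {T_M:.3e} K")   # ~ 1.4e32 K and ~ 2.3e31 K
```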

Maximal temperature would be infinite if the speed of light were infinite, or if the Newtonian gravitational constant were zero. Exported to black hole physics, or even to superstring/M-theories, maximal temperature seems to be a critical feature (critical point) that points out a phase transition to new degrees of freedom of (phase) space-time and of conventional particle physics or geometry. Moreover, maximal acceleration can be seen as a feature of some generalized uncertainty principles (GUP). By duality, if extended to the full PHASE space-time, if there is a minimal length (the minimon), there is a maximal (cosmic?) length (the maximon). If there is a maximal velocity, there should be a minimal velocity (more on this in a nearby special post…). If there is a maximal acceleration, there is a minimal acceleration. So, we should have minimons and maximons of everything.

In 4D general relativity, you can get a natural maximal force c^4/4G_N, curiously the gravitational force at the event horizon of the Schwarzschild solution. It is natural. Furthermore, the Bronstein-Zelmanov-Okun cube also suggests a maximal power, c^5/4G_N. The first observed gravitational wave, GW150914, released about 4\cdot 10^{49}W<P_M, according to the aLIGO collaboration. It is sometimes discussed how much of a transient GW event turns into GW energy. One should expect that electromagnetic radiation, neutrinos, and other forms of radiation were also emitted. But, then, what is the origin of the minimal/maximal mass/energy or density? That is a deep unsolved problem, since the suggestion of minimons and maximons for any magnitude has not been discussed in a more general context. To my knowledge, only something similar is suggested by the Buchdahl inequalities and other black hole investigations. Quantum hypergraphs! What is going on then? In n space-time dimensions, quantities Q\leq F/D^{n-3} are bounded. But there are also hints in superstrings. The string tension in natural units reads

    \[F_S=\dfrac{1}{2\pi\alpha^{'}}\]

If you support that F_g=c^4/4G_N is the maximal force, equating it to the above you get a relation with the string tension. And from the Nambu-Goto action

    \[S_{NG}=F_S\int_\Sigma dxdt=\dfrac{1}{2\pi\alpha{'}}\int_\Sigma dA\]

So the maximal acceleration can be related to the maximal complexity and maximal action conjectures by Susskind et alii! Even more, this argument is much more general, since you can apply it to any action, for instance, Born-Infeld theory, as follows

    \[S_{BI}=-F_S^2\int dvol\sqrt{-\det (g+2\pi\alpha^{'}F)}\]

Then, if you Taylor-expand this, you get Maxwell theory plus corrections of order the maximal tension, via the relation with \alpha^{'}. The full non-linear theory implies non-perturbative states and Schwinger-effect-like excitations of the vacuum. It is related to some critical fields, and thus, to maximal temperature as well! Of course, this could be broken at some level, OR it could be something much more fundamental, if completely understood. In relation to all of this, there is a very curious and surprising link (not always remarked) between Regge trajectories and Kerr-like solutions in black hole physics. Let me write 3 awesome bounds, stunningly similar. Firstly, the open relativistic string angular momentum to mass relation:

    \[\boxed{J\leq \dfrac{\alpha^{'}}{c^3}M^2}\]

Secondly, the closed relativistic string angular momentum to mass relation:

    \[\boxed{J\leq \dfrac{\alpha^{'}}{2c^3}M^2}\]

Thirdly, the Kerr black hole angular momentum to mass bound needed to avoid naked singularities (and so, to keep cosmic censorship true!):

    \[\boxed{J\leq \dfrac{G}{c}M^2}\]

If black holes are some type of rotating ring with tension, then we should expect

    \[\dfrac{G}{c}\sim \dfrac{\alpha^{'}}{c^3}\rightarrow \dfrac{1}{\alpha^{'}}\sim \dfrac{1}{Gc^2}\sim\dfrac{c^4}{G}\sim F_M\]

Therefore, the Hagedorn temperature, the Sakharov maximal temperature, the Hawking temperature and the Unruh temperature, up to some constant, give a critical value for those temperatures. It hints that gravity itself, or fields, are emergent concepts. Even space-time is effective. Any uniformly accelerated observer sees a temperature with respect to inertial observers. On the other hand, we have found that something happens at some critical temperature. It implies a maximal acceleration and hidden dynamics when the new degrees of freedom appear. Duality is a key concept. Even more, dualities on time dimensions are fun! Any quantum theory at non-zero temperature implies a link with TIME. Temperature and time are related via periodicity in IMAGINARY TIME, i.e., the imaginary-time period is \hbar\beta=\hbar/k_BT. In any relativistic quantum theory, purely kinematical Unruh temperatures will naturally appear. Thus, we should expect that, in any theory with a minimal length, maximal temperature and maximal acceleration are key. In fact, as I tried to tell a professor as an undergraduate (even before knowing all of this stuff!), a maximal acceleration (temperature) can NOT emerge from classical general relativity or classical string theory, since they do NOT include it in any obvious formulation. I have no knowledge of a formulation of classical general relativity or string/M-theory including a maximal acceleration principle from the beginning, and of its effects on them. Maximal acceleration (and minimal acceleration), maximal length (and minimal length) could be related to some unknown dynamics of string theory and quantum gravity. Perhaps I can not explain myself clearly. I do not know. But I am becoming more and more confident that a general min-max principle is operating in effective theories of quantum gravity. And it is completely general. In fact, black hole physics is a critical field theory in a sense.
When you equate the black hole Hawking temperature to the Unruh temperature, or more interestingly, to the Schwinger temperature, you get a maximal acceleration (maximal field!). If you do it yourself you will get a_M=c^4/(4G_NM), g_M=2Ec/\hbar, A_M=c^6/(4G_NE). Please, note the duality transformation in energy, and the equivalence g_M=A_M when E^2=\hbar c^5/8G_N, i.e., at the Planck energy scale.

A more formal derivation of maximal acceleration can be traced back to Caianiello himself, using a more general HUP, and remarking that it implies some sort of doubling of the degrees of freedom (coordinates), from space-time to phase space-time. Let me write the HUP:

    \[\Delta E\Delta g\geq \dfrac{\hbar}{2}\vert \dfrac{dg}{dt}\vert \]

For g=v it yields

    \[\Delta E\Delta v\geq \dfrac{\hbar}{2}\vert \dfrac{dv}{dt}\vert\]


    \[a=\vert \dfrac{dv}{dt}\vert\leq \dfrac{2\Delta E \Delta v}{\hbar}\]

For some fluctuations \Delta E=E=Mc^2 (quantum of energy!), and we do know that \Delta v\leq c, by special relativity, so we have at last

    \[\boxed{a\leq \dfrac{2Mc^3}{\hbar}}\]


You can get this result, even if you do not believe in quanta of time (chronons or choraons?), in the following way

    \[\Delta v=v-\langle v\rangle=v'(0)\Delta t+\dfrac{1}{2}v''(0)\Delta t^2+\mathcal{O}(\Delta t^3)\]

From this,

    \[\Delta v=\vert a(0)\vert \Delta t=\langle a\rangle \Delta t\]

and from classical HUP

    \[\Delta t\geq \dfrac{\hbar}{2E}\]

    \[\dfrac{\Delta v}{\langle a\rangle}\geq \dfrac{\hbar}{2E}\]

    \[\langle a\rangle\leq \dfrac{2E\Delta v}{\hbar}=\dfrac{2Ec}{\hbar}\]

and therefore again

    \[\boxed{a_M\leq \dfrac{2Mc^3}{\hbar}=\dfrac{2Ec}{\hbar}}\]

Why is this stuff important? Why should it be? Well, Penrose, here https://arxiv.org/abs/1707.04169 , introduced erebons (planckons, maximons,…) as dark matter particles. Erebons (or darkons) as the quanta of dark matter or dark energy naturally arise from minimal/maximal acceleration. Even more, you could guess a MONDian (MOdified Newtonian Dynamics) behaviour associated to them. Indeed, I gave a talk about this topic about a year ago…Let me begin with a MINIMAL acceleration a_0. Suppose you have a mass bounded by gravity in the presence of some type of background, minimal acceleration a_0. For a circular orbit, Newton’s second law provides:

    \[\dfrac{v^2}{R}=\dfrac{G_NM}{R^2}+a_0\]

    \[v^2=\dfrac{G_NM}{R}+a_0R\]

Square it, to obtain

    \[v^4=\dfrac{G_N^2M^2}{R^2}+a_0^2R^2+2G_NMa_0\]

If you assume that G_N^2<<1, a_0^2<<1, you obtain the MONDian law

    \[v^4\approx 2G_NMa_0\]
Remark: You could fit the minimal acceleration to the cosmological constant, via a_0=c\sqrt{\Lambda}. Is there any other physically appealing definition?

Suppose now you include a Hooke law term as well. Of course, it is related to the cosmological constant term (but it has the opposite sign to the normal Hooke law of springs!). You could think of the cosmic Hookean term as a certain class of maximal length term. By duality, some type of maximal/minimal tension. Mimicking the above arguments:

    \[\dfrac{v^2}{R}=\dfrac{G_NM}{R^2}+a_0+\Lambda R\]

    \[v^2=\dfrac{G_NM}{R}+a_0R+\Lambda R^2\]

Square it…

    \[\boxed{v^4=\dfrac{G_N^2M^2}{R^2}+a_0^2R^2+\Lambda^2 R^4+2G_NMa_0+2G_NMR\Lambda+2a_0\Lambda R^3}\]

I claim that in the limit where you neglect G^2_N, a_0^2, \Lambda^2<<1, you get a correction to MOND. Interestingly, this correction to MOND consists of two terms proportional to the cosmological constant, which become more and more important as R grows. This generalized MOND must be provided by some quanta of spacetime…The darkons, because they are associated to dark matter and dark energy. Of course, this is the same reason why Penrose suggested the name erebons for his test of cyclic cosmology. The origin of a_0, \Lambda is the issue. And of course, the G_N and the relative strengths of the different terms in the velocity-radius curve. I believe this law suggests a certain dynamics between a_0, \Lambda, and G_N, \Lambda that could explain why MOND fails in some particular systems. Or not!  Note that you can get maximal/minimal lengths or accelerations whenever you write c(c/L)^n.
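A toy rotation-curve sketch of the interpolation above (Python; NOT standard MOND, which has v^4=G_NMa_0 in the deep regime, while this toy law gives the plateau v^4\approx 2G_NMa_0; the galaxy mass and the usual scale a_0\approx 1.2\cdot 10^{-10} m/s^2 are illustrative assumptions):

```python
import math

# Toy law: v^2 = G M / R + a_0 R; at intermediate radii the cross term dominates
# and gives the flat plateau v^4 ~ 2 G M a_0. All numbers are illustrative.
G = 6.67430e-11                  # SI
M = 1.0e41                       # kg, roughly 5e10 solar masses of baryons
a0 = 1.2e-10                     # m/s^2, usual MOND scale (assumed)
kpc = 3.0857e19                  # m

for R_kpc in (1, 5, 20, 50):
    R = R_kpc * kpc
    v_newton = math.sqrt(G * M / R)            # Keplerian fall-off
    v_flat = (2 * G * M * a0) ** 0.25          # plateau of the toy law (~200 km/s)
    print(R_kpc, "kpc:", round(v_newton / 1e3), "km/s (Newton) vs",
          round(v_flat / 1e3), "km/s (plateau)")
```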

As a final off-topic, let me talk about the differences between the generation of electromagnetic (EM) and gravitational waves (GW).  EM waves require charged dipoles as generators, and they cause atoms to decay (down to stable ground states). GWs require massive bodies and quadrupole generators (since mass-energy is conserved). GWs require fast bodies. GWs, in principle, travel at the speed of light, or don’t they? Some people argue now that EM waves do not move at the speed “of light”; they move at the maximal speed that the space-time itself allows, which would be the GW speed, or c! GWs also have polarizations, as EM waves themselves do. GWs are distortions of space-time when it wobbles. Timey wimey stuff? Some GW pocket formulae for you today as a bonus! Define two circularly orbiting bodies at some distance r, with masses M_1, M_2. Define \mu=M_1M_2/(M_1+M_2), M=M_1+M_2, and remember the Kepler 3rd law, that says

    \[\Omega^2=\dfrac{G(M_1+M_2)}{r^3}\]

For circular orbits, the GW frequency is f_{GW}=2f_{orb}, \omega_{GW}=2\Omega, and then the binary system radiates so that, in the quadrupole approximation, the orbital frequency grows as

    \[\dfrac{d\Omega}{dt}=\dfrac{K}{5}\Omega^{11/3}\]

This equation can be integrated to get

    \[\Omega (t)^{-8/3}=\Omega_0^{-8/3}-\dfrac{8K}{15}t\]

with

    \[K\equiv \dfrac{96M_1M_2G^{5/3}(M_1+M_2)^{-1/3}}{c^5}\]
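Assuming my reading of the elided equations above (the decay law d\Omega/dt=(K/5)\Omega^{11/3}), a Python sketch with hardcoded, GW150914-like masses recovers the right order of magnitude for the time to coalescence from the bottom of the aLIGO band:

```python
import math

# Time to coalescence from the integrated decay law (assumptions: my reconstruction
# of the elided equations, approximate constants, 30+30 solar mass binary).
G, c, M_sun = 6.67430e-11, 2.99792458e8, 1.989e30

M1 = M2 = 30 * M_sun
K = 96 * G**(5 / 3) * M1 * M2 * (M1 + M2)**(-1 / 3) / c**5

f_gw = 35.0                        # Hz, entering the aLIGO band
Omega = math.pi * f_gw             # orbital frequency, since f_gw = 2 f_orb
t_to_merge = (15 / (8 * K)) * Omega**(-8 / 3)
print(f"time to coalescence from {f_gw} Hz: ~{t_to_merge:.2f} s")   # ~0.2 s
```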

See you in another wonderful blog post!

P.S.: Epitachyons, particles with a>a_M, might exist. Indeed, as you can see from extended relativity, superluminal, superaccelerated epitachyons have real energy.

P.S.(II): It seems that the SAME idea (maximons and minimons) has been discussed in the past by M. A. Markov and others. Markovian minimons are neutrinos under a Planckian seesaw, which we do know can not work with current data. m_\nu=M_D^2/M_X can be explained by a dimension-5 operator (Weinberg’s operator) containing heavy Majorana neutrinos (right-handed). A neutrino is kicked by a Higgs, turning it into a very heavy Majorana neutrino (unobserved); then it is kicked again and turned into a light neutrino (type I seesaw).

P. S. (III): The maximon (Maximón) is a Mayan folk deity as well!

P. S. (IV): Haven’t you got enough continuous time to read this? Take a chronon or choraeon (a discrete quantum of time!). In Caldirola’s theory

    \[\theta_0=\dfrac{1}{6\pi\varepsilon_0}\dfrac{e^2}{mc^3}\]
or equivalently

    \[\theta_0=\dfrac{2}{3}\dfrac{K_Ce^2}{mc^3}\simeq \alpha\dfrac{\hbar}{mc^2}\sim\dfrac{\hbar}{\Gamma}\]

where \Gamma is the decay width.
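Numerically (a Python sketch with approximate SI constants hardcoded), the electron chronon is tiny:

```python
# Caldirola's chronon for the electron, theta_0 = (2/3) K_C e^2 / (m c^3).
K_C = 8.9875517923e9        # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19         # C
m_e = 9.1093837015e-31      # kg
c = 2.99792458e8            # m/s

theta_0 = (2 / 3) * K_C * e**2 / (m_e * c**3)
print(f"chronon theta_0 ~ {theta_0:.3e} s")     # ~ 6.3e-24 s
```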

LOG#193. Bits on black holes (II).

The second and last blog post in this thread is about the analogies between black holes and condensed matter theory, in particular with phases of matter like fluids or superconductors. That is what explains my previous picture…And this post! Are BH geometries exotic classes of “matter”?

At current time (circa 2017), superconductors are not yet completely understood. They are a hard part of quantum many body physics!

Let me begin with something simpler you know from school. Imagine a single particle. Energy is conserved in virtue of symmetries (in particular, by time-translation invariance and the Noether theorem). Mechanical energy is, as you surely remember, the sum of kinetic energy and potential energy, E_m=E_k+E_p. Imagine now a large number of particles in a closed box. Who wins? Kinetic energy or potential energy? The answer gives you a hint of the state of matter the particles are in! If kinetic energy wins, you get a gas. If potential energy wins, you get a solid; in general, regular and ordered configurations are preferred (why? Good question! Let me talk about this in the future). The configurations minimizing the potential energy are provided by crystals or lattices. I am sure you know the conventional phase diagram for solids, liquids and gases. Here you are two, one simpler, one more complex:

Liquids are more complicated phases of normal matter, where neither the kinetic energy nor the potential energy wins. Liquids are complex phases “in between” solids and gases. In a liquid, there is a concrete balance between the kinetic energy and the potential energy. It is a complex and complicated balance. Molecules in a liquid attract each other, but they are not rigid like in a solid!

What about quantum liquids? Quantum mechanics introduces fluctuations via the Heisenberg Uncertainty Principle, \Delta X\Delta p\geq \hbar/2. Transitions between different phases or states of matter (energy) are related to fluctuations in their particle energies (average temperature and interactions). Quantum phase transitions are phase transitions happening at T=0K, at least theoretically (we do know we can not reach absolute zero).

Example 1. Mott transitions. Some metals, conductors, become insulators! Electron-hole excitations explain conduction in metals. The electron-electron repulsion can become strong. If so, a metal can become an insulator, something called a Mott insulator. Mott insulators, under certain doping prescriptions, can become superconductors too! Doping is a parameter here. It also happens at low temperatures, close to absolute zero.

Example 2. Antiferromagnetism. Electrons have electric charge AND spin. Spins can be aligned (ferromagnetism). The magnetic field correlates with spin. There is also antiferromagnetism. More precisely, there are four phases of magnetic ordering:

  1. Ferromagnetic phase. Below some critical temperature, spins are completely aligned and parallel to the magnetic field. They form magnetic domains.
  2. Antiferromagnetic phase. Below some critical temperature, spins are aligned antiparallel in magnetic domains.
  3. Ferrimagnetic phase. Below some critical temperature, spins are aligned antiparallel and parallel, but the total magnetic field does NOT cancel.
  4. Paramagnetic phase. Spins are randomly oriented. It happens above any of the critical temperatures mentioned before.

Diamagnetism is another related behaviour: induced magnetic moments tend to oppose external fields. It happens in superconductors, some organic compounds, metals,…

Another quantum phase transition involves the quark-gluon plasma (QGP). It happened in the early Universe, and it is recreated at the LHC (ALICE experiment) and other colliders. Also, you could try to simulate it, but you would need a supercomputer. Expensive? Yes! At least now, circa 2017. The unknown physics of the QGP is related to the Hagedorn temperature and the Hagedorn transitions. Some hints of this behaviour occur there as well.

Phases of matter do matter in the History of the Universe. In Quantum Mechanics, there is an order parameter that controls the transition. For instance, antiferromagnetism at absolute zero, increasing the temperature (or changing pressure or doping), can become disordered. There are weird phases of matter in between metal and insulator, like those Mott insulators I mentioned above! Phases in between the ferromagnetic and antiferromagnetic ones are just like a liquid in between solids and gases. They are phases where quantum fluctuations are important. A natural question arises: do there exist quantum liquids in between ordered and disordered phases? Yes! They exist in materials that many scientists are studying right now! For instance, of course, high temperature superconductivity. Understanding the phases in between conductors and superconductors could hold the key to understanding superconductors fully! Even the recently found time crystals are examples of quantum phases! Let me put these words into pictures:

There is a relative importance of motion versus interactions in strange phases of metals, quantified by the lifetime \tau of excitations. Excitations are ripples in electron-hole pairs. In natural units of electron-hole lifetimes, motion wins whenever \tau>>1, and interactions win whenever \tau<<1. This is related to the temperature T. Just like black holes! Thermal energy is related to kinetic energy. Also in light waves. So, to some extent, you could, in principle, create liquid light or “solid” light (of course, I know about liquid light, but not about solid light, yet!). Thus, f=1/t, and quantum mechanics relates E=\hbar \omega=hf=2\pi \hbar/t. Long-lived particles imply \tau>>1/T, where T is the temperature. You can also realize a connection between time and temperature. A subtle one! For a quantum liquid, we have \tau \approx 1/T. Doping metals changes how they behave. This is well-known in solid-state and condensed matter theory, but not so much in the theoretical physics I am going to discuss now!

Black holes were proposed originally by J. Michell in 1783, as places from which light can not escape! A similar idea was envisioned by Laplace in 1796. The theory of general relativity, in 1915, sharpened this concept. Light also feels gravity. Black holes imply a certain irreversibility, a deep concept and notion in physics, similar to the notion of entropy. At last, it seems black holes “are” entropy. We never keep track of all the details, and we are doing some coarse-graining. This is the battle of macrostates versus microstates. Molecules, atoms and particles are important in the distinction.

The Universe began in a low entropy state. It is crazy, because this is quite unlikely, but we think this is true. This low entropy state evolved towards states of growing entropy. Now, some questions arise:

  1. May black holes have an entropy associated to them? Yes! I have written the formula several times in this blog. Hawking is famous for it! The entropy of any BH scales as its area:

        \[S_{BH}=\dfrac{k_Bc^3A}{4G_N\hbar}=\dfrac{k_Bc^3\pi A}{2Gh}\]

    Moreover, the area theorem states that dA/dt\geq 0 for black holes, just like entropy in thermodynamics. That entropy is proportional to area and not volume is the origin of the holographic principle. dE=TdS holds for black holes as well, where E=Mc^2, and Hawking proved that dM_{BH}\propto \kappa dA, where \kappa is the surface gravity. Therefore, black hole thermodynamics is just like any other thermodynamical theory. Or isn’t it? We have lost the microstates! We don’t know yet the fundamental degrees of freedom of spacetime, i.e., the atoms of space-time. The thermodynamical analogy is very powerful, and it is now a solid, completely established field. As I mentioned above, and in the previous post, Hawking’s calculation applying Quantum Field Theory (QFT) and Quantum Mechanics to black holes proved they also have a temperature. Black hole temperature is a quantum effect! Note the appearance of the Planck constant in the Hawking temperature! BHs have a huge entropy. Even a solar mass BH has a very big entropy. The formation of black holes releases (hopefully) unknown degrees of freedom. Curiously, a solar mass black hole has about 10^{22} times the entropy of the Sun, and there are also 10^{22}-10^{23} stars in the observed universe! Recently, Verlinde proposed the idea of entropic forces in gravity. Gravity manifests itself as irreversibility from entropy!

  2. What is a BH made of? What are the BH microstates?
  3. How do BHs react to perturbations? Charges thrown into a stationary BH do create ripples on it! They could be described by diffusion equations! Heat diffuses and charges also diffuse through the BH event horizon! A BH should be close to thermal equilibrium.
  4. Are BH a disordered medium? What kind of ordered medium are BHs? Something in between known states? Diffusion equations could provide hints into this. If n is the amount of charge (charge density), the diffusion equation reads

    \[\dfrac{\partial n}{\partial t}=D\nabla^2 n\]
The constant D says how "quickly" the "bump relaxation" happens. The more inhomogeneous the distribution is, the faster it relaxes. Are BH like metals? Are BH some type of exotic matter? Are BH topological insulators/superconductors? Are BH a strange phase of space-time? Do BH conduct charges, and how? The electrical resistance \rho to the electron motion is an important quantity. We do not expect astrophysical BH to be charged, but accretion processes could provide those charges. What is the BH resistivity? If we connect a power source to a BH, it is out of equilibrium and does not evaporate. For instance, imagine a BH connected to some engine or power source. BH could behave like a quantum liquid in some circumstances! Classical BH do have some types of charge (M,Q,\Lambda,\ldots). \tau\sim 1/T_{BH}, D\sim 1/T_{BH}, and D\sim c^2\hbar/k_BT. How to measure a low temperature (very massive) BH? We have not even measured the cosmic neutrino background, at about 2K, or the cosmic gravitational background, at about 1K, so how could we measure supermassive BH temperatures? A real problem, indeed…
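The role of D in the "bump relaxation" can be illustrated with a minimal numerical sketch (pure Python; the grid size, time step and mode number are illustrative choices, not anything tied to black hole physics):

```python
import math

# Explicit finite-difference integration of dn/dt = D * d^2 n / dx^2
# on a periodic 1D grid. A single Fourier mode cos(kx) relaxes as
# exp(-D k^2 t): the sharper the bump (larger k), or the larger D,
# the faster the relaxation.
def diffuse(n, D, dx, dt, steps):
    N = len(n)
    for _ in range(steps):
        lap = [(n[(i + 1) % N] - 2 * n[i] + n[(i - 1) % N]) / dx ** 2
               for i in range(N)]
        n = [v + dt * D * l for v, l in zip(n, lap)]
    return n

N, k = 64, 1.0
dx = 2 * math.pi / N
n0 = [math.cos(k * i * dx) for i in range(N)]
D, dt, steps = 0.5, 1e-3, 1000          # total time t = 1.0
amp = max(diffuse(n0, D, dx, dt, steps))
# amp should be close to exp(-D k^2 t) = exp(-0.5)
```

Doubling D (or k) makes the amplitude decay correspondingly faster, which is the "the more inhomogeneous, the faster it relaxes" statement above.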

In summary:

  1. There are some difficult or hard materials, such as high temperature superconductors, topological insulators/superconductors and many others.
  2. There are materials that are neither ordered nor disordered.
  3. Liquids are the classical version of quantum liquids, a harder phase of matter.
  4. Classical BH have phase transition properties: surface gravity, mass or energy, entropy, charges,…When probed, BH behave strangely, just like a medium with a characteristic time scale \tau \sim\hbar /T=\hbar/E, neither long nor short. It is like the "in between" phases of matter we have introduced here.

Duality is a big idea in current and modern theoretical physics. Traditionally, we scientists have been reductionist. This is the origin of atomism (Democritus, Leucippus). Anaxagoras thought that everything is inside anything. Liquids are precisely the case where it is NOT clear what the building blocks are. Imagine water molecules in a turbulent river. They are complicated. Duality is the idea that a physical system admits many different descriptions that are fully equivalent. The first example of duality is the electromagnetic duality between electric charges and magnetic charges in vacuum, a quite classical example. In Quantum Electrodynamics (QED), you have light (photons) and matter (electrons), with F\sim 1/r^2. Electromagnetic field lines are made up of photons! QED works nicely because photons don't interact strongly in normal matter, and \alpha\sim 1/137. In the 1970s a question arose in Quantum Chromodynamics (QCD): what if the fluxtube between quarks and gluons is more fundamental than the quarks and gluons themselves? This is, of course, the origin of string theory. Either fluxtubes are fundamental and particles are a derived or emergent concept, or particles are fundamental and fluxtubes are a derived concept. What is then a fundamental or derived concept? It depends on the context! (Just for QM interpreters: is contextuality the key?) Mathematically, duality is a simple change of variables, from a description using certain degrees of freedom to another set of variables (the dual variables). Just as phonons in a crystal (solid) are useful, duality holds whenever a field strength is changed to its dual, F\rightarrow \overline{F}, and the fields transform into other, dual fields. In conclusion, duality is the idea that there are different descriptions of the same physics (physical system). It has been a hot topic since the 2nd superstring revolution (circa 1995) and the seminal works by Maldacena (circa 1997) on the so-called AdS/CFT correspondence (another duality).
The AdS/CFT correspondence is just the claim that the dynamics of certain BH with AdS geometry are dual to a certain quantum liquid described by a conformal field theory (CFT), originally N=4 SYM (Super-Yang-Mills) theory. Some questions arise in this condensed matter language:

  1. Does a quantum liquid described by this AdS/CFT duality exist? Can it be simulated? Can it become a superconductor or reach strange phases at low temperatures?
  2. Does a BH described by AdS/CFT geometry become superconductor or reach strange phases at low temperatures?

The ultimate question is, of course: is a BH a superconductor or a weird type of matter (geometry)? The no-hair theorem, originally by Wheeler, suggests that a BH can not be coated or doped with lots of charges or "stuff". You can't just put anything outside a BH: if you throw something there, it wants to fall in. There are only a few parameters or charges for a BH (although we have changed this thought a little bit by now). In principle, a BH should radiate off its charges to infinity, but classically this does not happen. This is related to the information paradox we are eager to solve, since it paves the way to understanding quantum gravity! What kind of superconductor can a BH be? Well, that is much more complicated to answer. The first answer would be that it is like the Higgs field. The Higgs boson has about 125 GeV of mass/energy, and a lifetime \tau\sim \hbar/\Gamma, with \Gamma_H\sim 4.1\cdot 10^{-3}GeV. The Higgs field is like a medium spread through all space and time. Certain particles in this medium acquire masses, and different particles get different masses: electrons, the Higgs boson itself, W bosons, muons,…In a superconducting condensate, much like a Higgs condensate, electrons AND photons get an effective mass. The consequence of the photon becoming massive is seen in the force law. For massless photons, F\sim 1/r^2, while for massive photons F\sim e^{-mr}/r^2. If you remember Ohm's law for electricity, J=\sigma E, then R=1/\sigma, where \sigma is the conductivity. This is very similar to mass generation in a superconducting condensate. If there is no electric field in the matter, and current flows without electric resistance (conductivity being infinite), we arrive at a superconducting phase. In superstring theory we also have L_P=g_s^{1/4}L_s, so superstring theory has phases as well. The BH in Maldacena's duality are different in important ways from these examples:

  1. Box avoids things radiating out to infinity.
  2. Charged black holes (RN BH, KNdS BH) are different.
  3. The electric field can create particle pairs via the Schwinger mechanism. The electron wants to screen the field; gravity wants to antiscreen the field.

The higher the temperature is, the bigger the BH is…but with boxes or boundaries in AdS/CFT. This is a common example of the battle between electromagnetism (and other forces) and gravity. If a BH is big (the high temperature limit), gravity wins, and the BH gets bigger. If the BH is small (the low temperature limit), the electromagnetic force wins, and it condensates some bits of charge outside the BH. That would mean a superconductive layer outside the BH could arise! And this means hair…Whether it is the soft hair that Hawking, Strominger and others have been proposing to solve the information paradox is something to be probed in the near future. From a phenomenological viewpoint, the question is what total charge a faraway observer sees, and it should be fixed. What sets the superconducting temperature of a BH, if any? What sets the critical temperature in a quantum liquid? Conventional superconductors have strongly correlated electrons (Cooper pairs). Cooper pairs are bosons, but they are truly couples of strongly paired electrons. High temperature charges related to quantum liquids in this BH "in between" phase of spacetime would mean that BH could be exotic superconductors. Any link to supersymmetry/SUSY? What about entanglement in superconductors and/or quantum liquids?

These questions are why I love physmatics!

P.S.: Epilogue (part II)…


LOG#192. Bits on black holes (I).


Hi, doctorish ladies and gentlemen!

I am going to teach some bits of black hole physics today. For the simplest static (non-rotating) black hole, the whole space-time is fully specified by the mass M and some constants, like G_N, \hbar, c, k_B, \ln (2),\ln (10). The critical radius for a black hole is the Schwarzschild radius

    \[R_S=\dfrac{2G_NM}{c^2}\]
The black hole area, assuming D=4=3+1 space-time, reads

    \[A_{BH}=\dfrac{16\pi G_N^2M^2}{c^4}\]

And the surface gravity at the event horizon is written as follows

    \[g=\dfrac{G_NM}{R_S^2}=\dfrac{c^4}{4G_NM}\]
It is very interesting that this surface gravity is, in fact, the maximal force guessed by the maximal force followers, divided by the black hole mass, i.e., g=F_M/M with F_M=c^4/4G_N. Surface gravity creates tides, with units of (m/s^2)/m, equal to:

    \[\dfrac{g}{R_S}=\dfrac{c^6}{8G_N^2M^2}\]
The celebrated Bekenstein-Hawking area formula for the entropy is, as you already know if you follow my blog:

    \[S_{BH}=\dfrac{k_B c^3 A}{4G_N\hbar}=\dfrac{k_B \cdot 4\pi G_NM^2}{\hbar c}\]

with units of J/K. A note I have never made before: entropy from the Boltzmann formula S=k_B\ln \Omega has dimensions of J/K, energy divided by absolute temperature. Using the Shannon definition, you get

    \[H=-\sum_i p_i\ln p_i\]

using units of nats. One nat equals 1/\ln 2 shannons (Sh) or 1/\ln 10 hartleys, bans or dits. And 1 hartley is \log_2 (10) bits =\ln (10) nats. Therefore, you can express the BH entropy in terms of J/K, hartleys, or shannons (i.e., bits or dits as well!). Hawking's biggest discovery was the black hole temperature, which fixed the 1/4 factor in the area law from Bekenstein's biggest discovery, the analogy between black holes and thermodynamics:

    \[T_{BH}=\dfrac{\hbar c^3}{8\pi G_N M k_B}\]
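As a quick check of the unit conversions above, one can evaluate the Bekenstein-Hawking entropy of a solar mass BH in J/K, nats and shannons (bits); the SI constant values below are standard approximate figures:

```python
import math

# Approximate SI constants
G, hbar, c, kB = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
M_sun = 1.989e30  # kg

# S = kB * 4*pi*G*M^2 / (hbar*c), in J/K (Bekenstein-Hawking, D=4)
S_JK = kB * 4 * math.pi * G * M_sun**2 / (hbar * c)
S_nats = S_JK / kB                 # dimensionless entropy, in nats
S_bits = S_nats / math.log(2)      # in shannons (bits)
S_hartleys = S_nats / math.log(10) # in hartleys (dits)
# A solar mass BH carries ~1e77 nats, i.e. ~1.5e77 bits of entropy.
```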

And, moreover, since black holes behave as blackbodies, they radiate! As they radiate, they become smaller (I am neglecting accretion from the macro-world, of course) and eventually they explode. The evaporation time for D=4 black holes is

    \[t_{ev}=\dfrac{5120\pi G_N^2}{\hbar c^4}M^3\]

and the black hole luminosity from a pure blackbody reads

    \[L=A\sigma T^4=\dfrac{\hbar c^2}{3840\pi R_S^2}=\dfrac{\hbar c^6}{15360\pi G_N^2M^2}\]

from a BH flux

    \[\phi=\dfrac{L}{4\pi R_S^2}=\dfrac{\hbar c^6}{61440\pi^2G_N^2R_S^2M^2}\]

As the luminosity is power, or rate of change of energy:

    \[L=-\dfrac{dE}{dt}=\dfrac{\hbar c^6}{15360\pi G_N^2M^2}\]

with E=Mc^2 becomes

    \[-\dfrac{dM}{dt}=\dfrac{\hbar c^4}{15360\pi G_N^2 M^2}\]


    \[-M^2dM=\dfrac{\hbar c^4}{15360\pi G_N^2}dt\]

    \[\int_0^{t_{ev}}dt=-\int_{M_0}^0M^2 \dfrac{15360\pi G_N^2}{\hbar c^4 }dM\]


    \[t_{ev}=\dfrac{15360\pi G_N^2}{\hbar c^4 }\int_0^{M_0}M^2dM\]

and thus (M_P is the Planck mass, and M_\odot=2\cdot 10^{30}kg is a solar mass)

    \[\boxed{t_{ev}=\dfrac{5120\pi G_N^2}{\hbar c^4}M^3=\dfrac{5120\pi G_N^2 M_\odot^3}{\hbar c^4}\left(\dfrac{M}{M_\odot}\right)^3=\dfrac{5120\pi\hbar M^3}{c^2 M_P^4}\approx 2.1\cdot 10^{67}\left(\dfrac{M}{M_\odot}\right)^3yrs}\]


    \[\boxed{t_{ev}=8.410\cdot 10^{-17}\left(\dfrac{M}{kg}\right)^3\;s}\]

The black hole power or luminosity reads

    \[P_{BH}=L_{BH}=\dfrac{\hbar c^6}{15360\pi G_N^2M^2}\]

for a solar mass black hole becomes

    \[P_{\odot BH}=L_{\odot BH}=9.007\cdot 10^{-29}W\]
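These evaporation and power formulas can be verified numerically. The sketch below (approximate SI constants) reproduces the 8.41·10⁻¹⁷ s/kg³ coefficient and the solar-mass figures, and checks that the blackbody identity L=A\sigma T^4 matches the closed formula:

```python
import math

G, hbar, c, kB = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
M_sun = 1.989e30
sigma = math.pi**2 * kB**4 / (60 * hbar**3 * c**2)  # Stefan-Boltzmann

def R_s(M): return 2 * G * M / c**2
def T_H(M): return hbar * c**3 / (8 * math.pi * G * M * kB)
def P_BH(M): return hbar * c**6 / (15360 * math.pi * G**2 * M**2)
def t_ev(M): return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

# blackbody check: A*sigma*T^4 equals the closed luminosity formula
L_bb = 4 * math.pi * R_s(M_sun)**2 * sigma * T_H(M_sun)**4

coeff = t_ev(1.0)                  # ~8.41e-17 s per (M/kg)^3
t_sun_yr = t_ev(M_sun) / 3.156e7   # ~2.1e67 yr for a solar mass BH
# primordial BH evaporating over the age of the universe (~13.8 Gyr)
M_pbh = (4.35e17 / coeff) ** (1 / 3)               # ~1.7e11 kg
# mass whose Hawking temperature equals the CMB temperature (2.725 K)
M_cmb = hbar * c**3 / (8 * math.pi * G * kB * 2.725)  # ~4.5e22 kg
```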

A Planck mass BH evaporates in about 10^{-39}s. For a solar mass BH, you need about 10^{75}s. A primordial BH, born in the early universe and evaporating by now, has to have a mass of about 10^{11}kg. However, taking the cosmic microwave background (CMB) temperature and equating it to the Hawking temperature, you obtain a mass bound of about 5\cdot 10^{22}kg\sim M_{Moon}. Black holes as dark matter (primordial black holes, PBH) have been proposed. The interesting window of mass is


Even more… If you add extra dimensions of space, e.g., n extra space-like dimensions, and you define the fundamental scale of gravity as M_D instead of M_P, then the evaporation time for a higher dimensional BH scales as follows

    \[t_{ev}(n)\sim \dfrac{1}{M_D}\left(\dfrac{M_{BH}}{M_D}\right)^{\frac{n+3}{n+1}}\]

Interestingly, the limits n=0 and n=\infty are "the same", except for the diffusion of gravitational flux through M_D.
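The scaling exponent can be checked with a trivial helper (`tev_exponent` is a hypothetical name, just encoding the formula above):

```python
def tev_exponent(n):
    # t_ev(n) ~ (1/M_D) * (M_BH/M_D)**((n+3)/(n+1))
    return (n + 3) / (n + 1)

# n = 0 recovers the 4D M^3 scaling; n -> infinity tends to linear scaling
exps = [tev_exponent(n) for n in (0, 1, 6, 10**6)]
```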

Black hole species have sizes:

  1. Micro BH. Mass about the moon mass. Radius about 0.1mm.
  2. Stellar BH. Mass up to tens of solar masses. Radius about 30km or a few hundreds of km.
  3. Intermediate mass BH. Yet to be discovered, but hinted at since the LIGO GW detections and other clues. Masses from hundreds of solar masses up to almost a million solar masses. Radii are variable; roughly the Earth's radius for a thousand solar mass BH.
  4. Supermassive BH. From millions to billions of solar masses or more (some people argue for an upper bound on BH mass). Radii are between 0.001 and 200 AU (astronomical units).

There are other types of black holes: extremal (and superextremal), type D, with cosmological constant, primordial,… Mechanisms for BH production in laboratory and/or astrophysical scenarios are also interesting. Even their simulation via analogue fluid systems or quantum computing! For Kerr or Kerr-Newman black holes, the following bound is known

    \[Q^2+\left(\dfrac{J}{M}\right)^2\leq M^2\]

in Planck units. The equivalence between the Hawking process, the Unruh radiation and the Schwinger mechanism is also curious. Other interesting radii for BH systems are (in the Schwarzschild case):

    \[r_{ph}=\dfrac{3G_NM}{c^2}\]

    \[r_{ISCO}=\dfrac{6G_NM}{c^2}\]

the photon sphere and the innermost stable circular orbit radius, respectively. And


    \[T_H=\dfrac{\hbar c^3}{8\pi G_NMk_B}\;\;\;\; T_U=\dfrac{\hbar a}{2\pi k_Bc}\]

are the Hawking and Unruh temperatures. Check the condition under which both formulae give the same number. Do you know the fastest way to get the correct BH entropy formula using basic thermodynamics? Note that the Hawking temperature in natural units is T_H=1/8\pi M. Knowing this, you can fix the numerical factor in the BH entropy from thermodynamics:

    \[dS=dQ/T=8\pi M dQ=8\pi M dM\]

    \[dS=8\pi M dM=d(4\pi M^2)\]


    \[S=4\pi M^2=\pi R_S^2\]

Since A=4\pi R_S^2=16\pi M^2 in these units, this is exactly S=A/4: the famous 1/4 factor of the area law. Indeed, in extra dimensions, you also get:

    \[R_S\sim \dfrac{1}{M_D}\left(\dfrac{M_{BH}}{M_D}\right)^{1/(1+n)}\]

    \[T_{BH}=\dfrac{n+1}{4\pi R_S}\]

    \[M_D^{n+2}=\dfrac{(2\pi R)^n}{8\pi G_{4+n}}=M_4^2/V_n\]


More on extra dimensional p-branes, not directly black holes but alike. The tension of a Dp-brane reads

    \[T_{Dp}=\dfrac{1}{(2\pi)^pg_sl_s^{p+1}}\]

and the YM coupling on it

    \[g_{YM}^2=(2\pi)^{p-2}g_sl_s^{p-3}\]
The Dirac-Nambu-Born-Infeld action reads

    \[S_{Dp}=-T_{Dp}\int d^{p+1}\xi\sqrt{-\det\left(\eta_{ab}+\partial_aX\partial_bX+2\pi \alpha^{'}F_{ab}\right)}\]

Note that M-theory fixes R_{11}=l_sg_s in some way. Black branes are BH-like solutions in superstring/M-theory. In extra dimensions, the Newtonian gravitational law reads as follows

(1)   \begin{equation*} \boxed{F_D=G_D\dfrac{Mm}{R^{D-2}}=\left(\dfrac{D-3}{D-2}\right)\dfrac{8\pi \overline{G_N}}{\Omega_{D-2}}\dfrac{Mm}{R^{D-2}}} \end{equation*}

where the omega factor is the surface area of the unit D-sphere

    \[\Omega_D=\dfrac{2\pi^{(D+1)/2}}{\Gamma \left(\frac{D+1}{2}\right)}\]
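The \Omega_D formula can be sanity-checked with the gamma function from the standard library: \Omega_1=2\pi (unit circle), \Omega_2=4\pi (ordinary sphere), \Omega_3=2\pi^2:

```python
import math

def omega(D):
    # surface "area" of the unit D-sphere: 2*pi^((D+1)/2) / Gamma((D+1)/2)
    return 2 * math.pi ** ((D + 1) / 2) / math.gamma((D + 1) / 2)

areas = [omega(D) for D in (1, 2, 3)]
```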

Kerr (rotating, uncharged) BH are interesting. Collide two of these BH: they will emit energy in the form of electromagnetic, gravitational or any other form of radiation. The maximal efficiency, from the area theorem (due to Hawking himself), is

    \[\eta_{max}=1-\dfrac{1}{\sqrt{2}}\approx 29\%\]

for two Schwarzschild black holes (it can be higher for extremal Kerr black holes).
Charged BH are worse for efficiency in BH thermodynamic processes. Hawking knew this from his paper, Gravitational radiation from colliding black holes. A rotating BH has a Hawking temperature

    \[T_{BH}(Kerr)=\dfrac{\hbar c^3}{4\pi k_B G_N M}\frac{\sqrt{1-(a/M)^2}}{1+\sqrt{1-(a/M)^2}}\]

where a=Jc/G_NM is the Kerr parameter. Take yourself a few Planck absements, A_P=L_PT_P\sim 10^{-79}\;m\cdot s, in order to guess that the right formula for the power of a Kerr BH is

    \[P_{BH}=\dfrac{\hbar c^6}{1920}\dfrac{\left(1-(a/M)^2\right)^2}{\pi G_N^2M^2\left(1+\sqrt{1-(a/M)^2}\right)^3}\]

For a Kerr-Newman BH (rotating, charged), the power is

    \[P_{BH,KN}=\dfrac{\hbar c^6}{240\pi G_N^2M^2}\dfrac{\left(1-\dfrac{K_CQ^2}{G_NM^2}-\left(\dfrac{a}{M}\right)^2\right)^2}{\left(2+2\sqrt{1-\dfrac{K_CQ^2}{G_NM^2}-\left(\dfrac{a}{M}\right)^2}-\dfrac{K_CQ^2}{G_NM^2}\right)^3}\]
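In the a\rightarrow 0, Q\rightarrow 0 limits, both rotating-hole power formulas must collapse to the Schwarzschild result \hbar c^6/15360\pi G_N^2M^2. A small numerical sketch (approximate SI constants; the dimensionless charge parameter is K_CQ^2/G_NM^2):

```python
import math

G, hbar, c, KC = 6.674e-11, 1.0546e-34, 2.998e8, 8.988e9
M_sun = 1.989e30

def P_schw(M):
    return hbar * c**6 / (15360 * math.pi * G**2 * M**2)

def P_kerr(M, x):            # x = a/M, dimensionless spin
    s = math.sqrt(1 - x**2)
    return (hbar * c**6 / 1920) * (1 - x**2)**2 / (
        math.pi * G**2 * M**2 * (1 + s)**3)

def P_kn(M, x, Q):           # Kerr-Newman; q = K_C Q^2 / (G M^2)
    q = KC * Q**2 / (G * M**2)
    s = math.sqrt(1 - q - x**2)
    return (hbar * c**6 / (240 * math.pi * G**2 * M**2)) * (
        (1 - q - x**2)**2 / (2 + 2 * s - q)**3)

P0 = P_schw(M_sun)           # ~9.0e-29 W for a solar mass BH
```

Spin suppresses the emitted power: P_kerr with a/M = 0.9 is about a tenth of the Schwarzschild value.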

To end this post, and to prepare you for the follow-up post, let me speculate a little bit about what BH "are", or what "we think they are":

  • Black holes are spacetime, rotating or not, charged or not, but they are highly curved space-time fully specified by some parameters or numbers.
  • Quantum BH, or at least semi-classical BH, have a temperature and they evaporate. If you take this seriously, space-time itself decays, at least highly curved space-time.
  • BH have microstates, but we don’t know for sure what they are.

That the entropy of any BH seems to scale like area, and not like volume, is sometimes referred to as the holographic principle. BH entropy seems to be non-extensive. Indeed, this suggests a link with condensed matter, or even with solid state theory. Are BH "materials"? Of course, since matter and energy are equivalent in relativity, they could be. That the charges of a BH live, it seems, on the boundary and not in the full volume resembles topological insulators or superconductors. The quantum theory of space-time is yet to be built. Is it a world crystal, as P. Jizba and collaborators suggest? A crystal is any highly ordered microscopic structure with lattices extending in all directions of space. However, solids are generally much more complex. Polycrystals are many crystals bonded or fused together. Is space-time, or a BH, a polycrystal? The classification of solids into crystalline, polycrystalline and amorphous could also be useful in BH physics! Polymorphism implies many crystal structures or phases. There are allotropy and polyamorphism as well. Furthermore, if you extend these thoughts to quasicrystals, you get a bigger picture of black holes. Quasicrystals are non-periodic "ordered" arrays of atoms. Could BH be quasicrystals? The International Union of Crystallography (IUCr) defines crystal in a very general fashion. Its definition contains ordinary periodic discrete crystals, quasicrystals, and any other system showing some periodic diffraction diagram/pattern. Crystallinity itself is any structural order of a material, solid or material system. This definition paves the way towards topologically ordered systems. It yields and correlates with hardness, density, transparency, diffusion and other material features. Crystallites, or grains, are the basic pieces (atoms) of polycrystals or polycrystalline matter. There are also materials "in between" crystals and amorphous materials. They are called paracrystals.
More precisely, paracrystals are materials with short- and medium-range lattice ordering, similar to liquid crystal phases, but lacking ordering in at least one direction. Geologists admit, today, four levels of crystallinity:

  1. Holocrystalline.
  2. Hypocrystalline.
  3. Hypohyaline.
  4. Holohyaline.

Open question: could you guess a way to classify BH solutions with a certain geological dictionary? I did it. And it is a lot of fun! Let me know if you arrive at a conclusion like mine.

Open question (II): the Hagedorn temperature is the temperature beyond which string theory ceases to make sense. The degrees of freedom have to be redefined beyond that point. Dimensionally, check that

    \[T_H=\dfrac{1}{2\pi\sqrt{\alpha^{'}}}=\dfrac{1}{2\pi l_s}=\dfrac{\hbar c}{2\pi l_sk_B}\]

When is the Hagedorn temperature equal to the Planck temperature? Could they be different? Following the same arguments, calculate the temperature of a gas of p-branes and its Hagedorn temperature.
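The first question can be probed numerically: if you (hypothetically) set the string length l_s equal to the Planck length, the Hagedorn temperature T_H=\hbar c/2\pi l_sk_B is exactly the Planck temperature divided by 2\pi (approximate SI constants below):

```python
import math

G, hbar, c, kB = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
L_P = math.sqrt(hbar * G / c**3)          # Planck length, ~1.6e-35 m
T_P = math.sqrt(hbar * c**5 / G) / kB     # Planck temperature, ~1.4e32 K

l_s = L_P                                 # assumption: l_s = L_P
T_Hag = hbar * c / (2 * math.pi * l_s * kB)
# T_Hag = T_P / (2*pi); a larger string length pushes T_Hag below T_P
```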

See you in my next blog post!




LOG#191. The hooke and the dyson.

Hello, metriplectic followers and friends! I have often written about Stoney units or Planck units in this blog. Indeed, I have a complete thread about systems of units. For Stoney and Planck units, you get some basic units of common S.I. quantities:



where I have used the normal (not reduced) Planck constant into the Planck units. It is worth mentioning that some Planck units do NOT contain the Planck constant, i.e., they are purely “classical” (if you mean absence of the quantum of action) instead of quantum. For instance, the speed of light or Planck velocity v_P=c, the Planck force

    \[F_P=\dfrac{c^4}{G_N}\approx 1.21\cdot 10^{44}N\]

the Planck power

    \[P_P=F_Pc=\dfrac{c^5}{G_N}\approx 3.63\cdot 10^{52}W\]

Since 1993, several authors, e.g., De Sabbata & Sivaram, Massa, Kostro & Lange, Gibbons, Schiller, and Barrow & Gibbons, have argued about the role of the Planck force as the maximal force in Nature. In parallel, other authors, pioneered by Caianiello, have also discussed the role of the maximal acceleration principle as a hint of quantum gravity, both on a theoretical basis and from a heuristic, quantum phenomenological viewpoint. Even more, Caianiello himself tried to link the emergence of Quantum Mechanics to some kind of non-trivial phase space-time geometry.

In this post, I am going to do justice to Hooke, whom Newton hated (banishing Hooke's portraits from the Academy). I am going to honor Hooke. I propose a new unit of force, the hooke, as the value of the maximal force:

    \[\boxed{1\;\;\mbox{hooke}\equiv 1\;\; Hk\equiv \dfrac{c^4}{4G_N}=F_M=\dfrac{F_P}{4}}\]

With this definition, 1\;hooke=0.25\;\mbox{Planck force}=F_P/4. Moreover, and I am sure Newton himself would not have liked this, in S.I. units:

    \[\boxed{1\;\;\mbox{hooke}\equiv 1\;\; Hk\approx  3.03\cdot 10^{43}newtons}\]

A hooke is equal to lots of newtons! About thirty tredecillion newtons!

I am sure you recognize that the cosmological constant is itself some kind of Hooke-law constant (with inverted sign, since it is repulsive instead of attractive). De Sitter units were also candidates for some unit honoring Hooke, but they are too conventional:

    \[M_{dS}=\dfrac{c^2\Lambda^{-1/2}}{G_N}\;\; L_{dS}=\Lambda^{-1/2}\;\;T_{dS}=c^{-1}\Lambda^{-1/2}\]

But I preferred to name the hooke after the maximal force. It is a good idea, I think.

On the other hand, the maximal power (or luminosity, in astronomical terms) can also be defined as

    \[P_M=\dfrac{c^5}{4G_N}\]
My second proposal is not really mine. It is inspired by Barrow and Gibbons, up to a proportionality constant. Since the usual Planck units do not care about the 1/4 (yes, I am thinking about the 1/4 in the Bekenstein-Hawking entropy as well), I think it is more appropriate to call the dyson the maximal power:

    \[\boxed{1\;dyson\equiv 1\;\; Dn\equiv \dfrac{P_P}{4}=P_M=\dfrac{c^5}{4G_N}}\]

This is a very useful quantity for gravitational wave astronomy, or even for neutrino astronomy. Usually, a big quantity of energy is emitted during a supernova explosion. Some physicists have introduced a unit called the foe (Fifty-One-Ergs) to measure the energy released by supernovae: 1\;foe=10^{51}ergs=10^{44}joules. How many foes do you have? How many hookes do you weigh? How much power do you have (in dysons)?
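A short numerical sketch of the proposed units, with approximate SI constants (the 80 kg body weight is, of course, an arbitrary example):

```python
G, c = 6.674e-11, 2.998e8

hooke = c**4 / (4 * G)       # maximal force, ~3.03e43 N
dyson = c**5 / (4 * G)       # maximal power, ~9.07e51 W
foe = 1e44                   # J, typical supernova energy release

# an 80 kg person weighs ~2.6e-41 hookes on Earth (g ~ 9.81 m/s^2)
weight_hookes = 80 * 9.81 / hooke
```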

Speaking about the Dyson luminosity as maximal power or luminosity, it is generally assumed that, in known black hole physics:

    \[L_{BH}\leq \dfrac{c^5}{G_N}\]

The maximal power of gravitational radiation emitted by any binary star or system with a pair of black holes, with M_1=M_2=M, orbiting in a circular orbit with relative velocity v=V\leq c, is (from the quadrupole formula)

    \[P=\dfrac{2}{5}\dfrac{c^5}{G_N}\left(\dfrac{V}{c}\right)^{10}\]

When v=c, you get

    \[P=\dfrac{2}{5}\dfrac{c^5}{G_N}>\dfrac{c^5}{4G_N}\]
You could complain right now, since this is greater than the proposed maximal power, but gravitational radiation is (as far as I know) a classical prediction here. I think that the bound on power is meaningful (it could be, at least) as an auxiliary property of quantum gravity…Even when there is no obvious h constant? Yes, but it is hidden…inside the big G_N.

So, in a natural way, classical gravitational wave radiation (or is it quantum?) is measured in dysons, irrespective of the values of the masses! Maybe the dyson is even a useful unit for gamma ray theories, gamma ray bursts or fast radio bursts (OK! For FRB this is not so good…). Indeed, for the first gravitational wave detected by LIGO, GW150914, the peak power was of the order of a millidyson.

On the other hand, let me add that these types of bounds are not unique to beyond-general-relativity or quantum gravity ideas. Thorne's hoop conjecture can be restated as the claim that every apparent horizon satisfies the bounds

    \[\beta_b\leq \dfrac{4\pi G_NM_{ADM}}{c^2}\]

    \[\beta_b\leq 2\pi G_N\dfrac{M_{BY}}{c^2}\]

where M_{BY} is the Brown-York mass, and also

    \[\dfrac{\beta_b}{8}\leq \dfrac{G_NM_{BY}}{c^2}\]

What about the meaning of other classical (G_N,c) units of the form Q(n)=c^n/G_N? Kostro (2010) gave interpretations of this quantity for n=0,1,2,3,4,5 in N=3 (3d) space:








Even though it was not discussed by Kostro, the next in the list is Q(6)=c^6/G_N. It has dimensions ML^3T^{-4}; in terms of more common quantities, it is energy times acceleration! What about the extension to negative n? It is a nice duality and dimensional analysis exercise for you! Provided you want to do something in physmatics! What about the reciprocal, by duality, of Q(n), i.e., 1/Q(n)? These quantities are:

    \[1/Q(0)=G_N=M^{-1} T^{-2} L^3\]

    \[1/Q(1)=G_N/c=M^{-1} T^{-1} L^2\]

    \[1/Q(2)=G_N/c^2=R_S/2M=M^{-1} T^{0} L=LM^{-1}\]

    \[1/Q(3)=G_N/c^3=M^{-1} T=T/M\]

    \[1/Q(4)=G_N/c^4=M^{-1} L^{-1}T^{2}\]

    \[1/Q(5)=G_N/c^5=M^{-1} L^{-2} T^3\]

    \[1/Q(6)=G_N/c^6=M^{-1} L^{-3} T^4\]

    \[1/Q(n)=G_N/c^n=M^{-1} L^{3-n} T^{n-2}\]
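The general dimensional formula can be encoded as a tiny function returning the (M, L, T) exponents of 1/Q(n)=G_N/c^n; it reproduces every entry of the list above (`dim_inv_Q` is just an illustrative helper name):

```python
def dim_inv_Q(n):
    # 1/Q(n) = G_N / c^n has dimensions M^-1 L^(3-n) T^(n-2)
    return (-1, 3 - n, n - 2)

table = {n: dim_inv_Q(n) for n in range(7)}
```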

By the way, the Planck force is equal to the Stoney force, and it is sometimes called Kittel force. Q(2) and Q(4) are combined with the cosmological constant to obtain a density and energy density for the vacuum:


    \[\rho_\Lambda=\dfrac{\Lambda c^2}{8\pi G_N}\]

    \[\varepsilon_\Lambda=\rho_\Lambda c^2=\dfrac{\Lambda c^4}{8\pi G_N}=\dfrac{\Lambda}{2\pi}\dfrac{c^4}{4G_N}\]

Kostro also introduced the minimal quantities associated to the Hubble parameter (and hence to the cosmological constant), as follows:

    \[E_m=\hbar H=\dfrac{c^4}{G_N}\dfrac{L_P^2}{R_H}\approx 2.43\cdot 10^{-52}J\]

    \[M_m=\hbar H/c^2=2.698\cdot 10^{-69}kg\]

    \[\Theta_m=\dfrac{\hbar H_0}{k_B}\approx 1.76\cdot 10^{-29}\;K\]

    \[L_m=\dfrac{G_N}{c^4}\hbar H_0\approx 2.013\cdot 10^{-96}m<<L_P\]

    \[T_m=\dfrac{G_N}{c^5}\hbar H_0\approx 6.723\cdot 10^{-105}s<<T_P\]

    \[Q_m=Q_P=(\hbar HR_H)^{1/2}=(\hbar c)^{1/2}\]

and some quanta of gravitational actions h_G=GM^2/c read as follows



Compare these factors with the usual Planck quanta of action: c_P=\hbar c and c_P^{'}=K_C^{1/2}e.
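Kostro's minimal quantities above can be reproduced with approximate SI constants and an assumed Hubble rate H_0\approx 2.3\cdot 10^{-18}\;s^{-1} (about 71 km/s/Mpc; the exact figure is an assumption here):

```python
G, hbar, c, kB = 6.674e-11, 1.0546e-34, 2.998e8, 1.381e-23
H0 = 2.3e-18  # s^-1, assumed Hubble rate

E_m = hbar * H0              # minimal energy, ~2.4e-52 J
M_m = E_m / c**2             # minimal mass, ~2.7e-69 kg
Theta_m = E_m / kB           # minimal temperature, ~1.8e-29 K
L_m = (G / c**4) * E_m       # ~2.0e-96 m, far below the Planck length
T_m = (G / c**5) * E_m       # ~6.7e-105 s, far below the Planck time
```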

Barrow and Gibbons, again, have tried to link the quantities


in N space dimensions, N=D-t, t being the number of time dimensions, to the following magnitudes:



The ratio between the magnetic moment \mu and the angular momentum J is proposed to be bounded as

    \[\dfrac{\mu}{J}\leq \beta\dfrac{G^{1/2}}{c}\]

valid for N=3, and


is valid \forall N since




and the algebra is obvious. Interestingly, there is a famous number whose name I did not know until recently. It is called the Zöllner number, defined as

    \[Z=\dfrac{K_CQ^2}{G_NM^2}\]
This number is the ratio between the Coulomb law and the Newtonian gravitational force at any distance (it is independent of the separation, provided both laws hold in the SAME number of space-time dimensions): a pure number that measures the relative strength of the electrical force versus the gravitational force. For the electron, the Zöllner number becomes:

    \[Z=K_Ce^2/G_Nm_e^2\approx 4\cdot 10^{42}\]
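Evaluating the electron Zöllner number with standard approximate constants gives \approx 4.2\cdot 10^{42}, the often-quoted order of magnitude:

```python
G = 6.674e-11       # Newtonian gravitational constant
KC = 8.988e9        # Coulomb constant, N m^2 / C^2
e = 1.602e-19       # elementary charge, C
m_e = 9.109e-31     # electron mass, kg

# ratio of Coulomb to Newtonian force between two electrons,
# independent of their separation
Z_electron = KC * e**2 / (G * m_e**2)
```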

42 is the meaning of life, isn't it? The Larmor (gyromagnetic) relation for fundamental particles,

    \[\mu=\dfrac{q}{2mc}J\]

with the extra g factor reads

    \[\mu=g\dfrac{q}{2mc}J\]
For the electron, g\approx 2. It is not exactly 2 due to quantum corrections. For black holes, there is a conjecture, due to Schuster, Blackett and Wilson, that states:

All rotating bodies should acquire a magnetic moment given by

    \[\mu=\beta\dfrac{G^{1/2}}{c}J\]
For electrons, it is sometimes proposed that \beta =N^{1/2}, and there is the black hole bound \vert Q\vert/G_N^{1/2}M\sim \mathcal{O}(1). Thus, Planck mass particles with Q\approx e should have \beta\sim \mathcal{O}(1). Moreover, there is an old observation by Brandon Carter: any Kerr-Newman BH in Einstein-Maxwell theories must have g=2 in \mu/J=Q/Mc in order to avoid NAKED SINGULARITIES. The proof is simple:

    \[G_NM^2\geq Q^2+\dfrac{J^2}{M^2}\]


    \[\mu/J=Q/Mc\Rightarrow Q^2=\dfrac{M^2c^2\mu^2}{J^2}\;,\;\;\mbox{and dividing by}\;\;G_NM^2:\]

    \[\dfrac{c^2\mu^2}{G_NJ^2}+\dfrac{J^2}{G_NM^4}\leq 1\]

    \[\dfrac{\mu^2}{J^2}\leq \dfrac{G_N}{c^2}-\dfrac{J^2}{M^4c^2}\leq \dfrac{G_N}{c^2}\]

and thus

    \[\vert \dfrac{\mu}{J}\vert \leq \dfrac{G_N^{1/2}}{c}\]

So \beta\leq 1 for any Kerr-Newman BH, and the Zöllner-like ratio Z=Q^2/G_NM^2\leq 1 in flat space-time. Easy, and cool. Isn't it?
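Carter's argument can be probed numerically in G=c=1 units: sample sub-extremal (M, J, Q) triples, impose g=2 (\mu/J=Q/M), and confirm that \mu/J never exceeds G^{1/2}/c=1. A sketch with a fixed random seed:

```python
import random

random.seed(42)

def sample_subextremal():
    # G = c = 1 units: pick M, then (Q, J) inside Q^2 + (J/M)^2 <= M^2
    M = random.uniform(1.0, 10.0)
    while True:
        Q = random.uniform(-M, M)
        J = random.uniform(-M * M, M * M)
        if Q**2 + (J / M)**2 <= M**2:
            return M, Q, J

ratios = []
for _ in range(1000):
    M, Q, J = sample_subextremal()
    if J != 0:
        mu = Q * J / M           # g = 2 gyromagnetic relation: mu/J = Q/M
        ratios.append(abs(mu / J))

# every sampled ratio obeys the bound |mu/J| <= G^{1/2}/c = 1
```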



LOG#190. Fundamental challenges (II).

The main challenge left for the 21st century and beyond is to unify every interaction into a single theory. Today, we have two main theories:

  • The Standard Model, for particle physics.
  • The cosmological Standard Model, or \Lambda-CDM, as well LCDM, for cosmology and astrophysics/astronomy.

The SM particles are fermions and bosons; or leptons, quarks and force carriers. There are six quark flavors and six lepton flavors, plus their antiparticles. Force carriers are photons, Ws, Zs, gluons, and the mass giver, the Higgs boson H^0. The SM does not include gravitation, so the graviton is just another story. The LCDM cosmological model tells us that atoms and particles make up only a tiny 5% of the whole Universe. The rest, 95%, is dark matter, 27%, and dark energy, 68%. The SM particles can NOT be the DM or the DE, so we are in trouble.

The Higgs field is also mysterious. It is the mass giver for fundamental particles (not for composite entities like the proton, which gets its mass through QCD), and any fundamental particle obtains its mass by interacting with the Higgs field through the Higgs mechanism. No one knows why the couplings of the Higgs to the particles are as we measure them. The Yukawa couplings are not derived from any symmetry, as the Higgs itself is not associated with any known symmetry. Sometimes, the Higgs particle and its field are imagined through certain (not precise, I might say) analogies:

Analogy 1. The Higgs field is some kind of fluid with resistance or friction/viscosity. Issue: “particles can not be stopped”.

Analogy 2. The Higgs field is like a room with people in which a famous person enters in. Issue: “elementary particles are not just like famous people”.

Anyway, the most formidable analogy to imagine the Higgs field is to associate it with the quantum fluctuations of a sea (the vacuum). Higgs particles are just the vacuum excitations of this field, like chunks of sea water under certain circumstances. Let me ask 7 questions, unanswered but sometimes asked in seminars or discussions:

  1. Why 3 families or generations? Why 6 flavors of leptons and quarks? Nobody knows.
  2. Scales: why M_H, M_W, M_Z<<M_P? Nobody knows.
  3. Why dark matter and dark energy in the proportion we measure? Nobody knows.
  4. Why is the cosmological constant NOT zero? Is it the vacuum energy? If so, it should be much, much bigger according to QFT; why is it not? Nobody knows.
  5. Origin of the Universe: inflation. Why does inflation seem to be the only explanation of the flatness problem? What caused inflation? Nobody knows.
  6. Is our Universe unique? Is there a Multiverse/Polyverse? Nobody knows.
  7. Can every fundamental interaction be unified into a single theory, the theory of everything (TOE)? What is the real TOE? Nobody knows.

Why is there something instead of nothing? Well, that is another puzzle, related to the matter-antimatter ASYMMETRY we observe today. The CP asymmetry, or why there is much more matter than antimatter, is a riddle with no known definitive solution yet.

Certain symmetries forbid masses except in some special cases. In QCD, the theory introduces the scale \Lambda_{QCD}. The SM introduces M_H=\lambda v and m_f=g_Yv for the particle masses. The strange fact is that there are many light particles compared to the Higgs or the top quark (the only anomalously massive fermion; whether this means something is still obscure). The expectation value of the Higgs is

    \[\langle 0\vert H\vert 0\rangle =v\approx 246GeV\]

The Higgs mass, about 125 GeV, is unstable under quantum corrections. Since the Higgs mass is NOT the Planck mass, something must exist to keep it low. Whatever it is, it is unknown. This fine-tuning is very strange. The superstring revolutions led lots of people to impose supersymmetry, a highly non-trivial symmetry mixing space-time symmetries with internal symmetries, as an ansatz to save the world. Supersymmetry, SUSY for short, can keep the Higgs mass light and much less than the Planck mass. However, no SUSY particles have been discovered at the LHC so far, or in any DM detection experiments outside the LHC around the world. SUSY is just an operator changing fermions F into bosons B, as follows:

    \[Q\vert F\rangle=\vert B\rangle\;\;\;\; Q\vert B\rangle=\vert F\rangle\]

If SUSY is not "broken", then N_B=N_F, i.e., there are the same number of bosons as fermions. The SM is not SUSY (however, some people could think otherwise…). The minimal supersymmetric SM is called the MSSM. Its particle content is given by the next table:

I personally find the equations and complexity of the MSSM disgusting. Of course, it can be done, but the SM works fine, and better, without doubling the particle spectrum. There are some nightmares for particle physicists these days. The most terrible scenario is the so-called desert. The desert hypothesis says that there is no new physics between the electroweak scale and the Planck (mass-energy) scale. Well, if you believe that neutrinos have masses (as indeed they do), this can NOT be true. Neutrinos are massive (at least one of their species), and you have neutrino oscillations. Neutrino oscillations and the seesaw mechanism are a well-established way to give masses to neutrinos (even when you can get Dirac masses), including the exotic Majorana mass. Right-handed neutrinos could be very heavy, and they could explain lots of parallel issues. The sterile (right-handed) heavy neutrino could set a scale of new physics. By the seesaw, that scale can NOT be the Planck mass; it is something between 100 TeV and 10^9 TeV. Massless neutrinos could yet exist, but at least one neutrino species is massive, and that is the biggest hint we have about physics beyond the SM, together with the Higgs mass and the top quark coupling to the Higgs.

The SM has another, more complicated issue I have discussed before, in the QCD sector. There is an interaction term that could exist, made of the dual field strength of the YM theory in SU(3). That term, the so-called \theta term, is very close to zero. This is the strong CP problem. To solve it, many ideas have been proposed, the most popular being the axion or even extra dimensions.

Cosmology faces similar issues. Even when precision Cosmology is yet to come, the current values of the cosmological parameters raise a number of questions similar to the above:

  1. What is the fate of our Universe? Options: Big Freeze, Big Crunch, Big Rip, Little Rip, Vacuum Decay, …Nobody knows.
  2. Why does the cosmological constant/vacuum energy density not gravitate? Nobody knows. Put into words: M_P^4\neq \langle \rho_\Lambda\rangle.
  3. Why is the observed vacuum energy density about a few meV^4? Nobody knows.
  4. Are primordial gravitational waves out there? Nobody knows (but they are expected).
  5. Black holes have a Hawking temperature about T_H\sim 60(M_\odot/M)nK. This implies astrophysical BHs are very cold, even colder than the cosmic microwave background! However, primordial BHs are thought to exist in some models or scenarios. They could be massive, as massive as the Moon. So, people are trying to search for them, or argue that they are (all or a big part of) the dark matter we suspect is out there. The evaporation time of a BH scales as the cube of its mass, just as follows:

    \[t_{ev}=\dfrac{5120\pi G^2_NM^3}{\hbar c^4}\sim 10^{67}\left(\dfrac{M}{M_\odot}\right)^3\;yr\]
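These numbers are easy to check. A quick Python sketch with rounded SI constants (an order-of-magnitude check, not precision physics) of the Hawking temperature and the M^3 scaling of the evaporation time:

```python
# Hawking temperature and evaporation time for a Schwarzschild black hole.
# A minimal numerical sketch using rounded SI values of the constants.
import math

hbar = 1.0546e-34   # J*s
c    = 2.9979e8     # m/s
G    = 6.674e-11    # m^3 kg^-1 s^-2
kB   = 1.3807e-23   # J/K
M_sun = 1.989e30    # kg

def hawking_temperature(M):
    """T_H = hbar c^3 / (8 pi G M k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * kB)

def evaporation_time(M):
    """t_ev = 5120 pi G^2 M^3 / (hbar c^4), in seconds; scales as M^3."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

print(hawking_temperature(M_sun))          # ~6e-8 K, i.e. ~60 nK
print(evaporation_time(M_sun) / 3.15e7)    # ~2e67 years
M_moon = 7.35e22                           # kg, a moon-mass primordial BH
print(hawking_temperature(M_moon))         # ~1.7 K, near the CMB temperature
```

Note that a moon-mass primordial black hole is already warmer than the CMB, which is why that mass window is interesting.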

GUTs, grand unified theories, imply the existence of X bosons that make the proton unstable. However, the proton is very stable. To detect a proton decay you must gather a huge number of protons to enlarge the odds of seeing one decay into something. Popular theories beyond GUTs, TOEs like superstring theory, face similar problems and even more. They predict the existence of 10^{500} different vacua. The landscape problem in string/superstring theory is that there are too many compactifications of the higher dimensional theories (10D and 11D mainly) giving a vacuum or Universe like ours. The original hope of finding a single vacuum seems to have been lost in the stringy field.

A few additional comments: without the cosmological constant, it seems, there is no Universe like ours. I mean, if the energy density of the vacuum were so big, the Universe would have collapsed, unless it does not couple to gravity. That seems unlikely. Who knows? On the other hand, u, d quarks are massive due to the Higgs, but they hadronize due to QCD. Protons and neutrons (or baryons in general) are complex objects. Indeed, the main contribution to the proton (or neutron) mass comes from non-perturbative effects in QCD. It is the highly non-trivial vacuum structure of QCD that gives the proton its mass. Without QCD, there would be no hadrons as we know them. Without the Higgs boson, atoms would not bind and form, since electrons would be massless and would escape at the speed of light from hadrons or nuclei!

What is mass? The SM changes the paradigm. Mass (of fundamental particles) is provided via interactions with the Higgs field through Higgs boson particles. Any particle X gets a mass M_X=g_Y\langle v\rangle. So, we have changed the question of why there are masses into the question of why the Yukawa couplings and the Higgs v.e.v. are those we observe. Nobody knows the reason. Hadrons, I insist, get masses via QCD, at the effective level via interactions with pions. The proton has a mass about M=1836m_e\approx 6\pi^5m_e. Nobody knows why that works. Or why the proton mass and the neutron mass are a little bit different. Why is the electron mass 511 keV? Nobody knows.
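The 6\pi^5 coincidence for the proton-to-electron mass ratio is easy to verify numerically; a tiny Python check (the measured ratio is the CODATA value, rounded):

```python
# The curious numerical coincidence m_p/m_e ~ 6*pi^5 mentioned above.
import math

mass_ratio = 1836.15267        # measured proton-to-electron mass ratio (CODATA)
lenz = 6 * math.pi**5          # ~1836.118
print(lenz, abs(mass_ratio - lenz) / mass_ratio)   # agreement to ~2e-5
```

A pretty coincidence, with no known explanation, as said above.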

The SM lagrangian is a complex lagrangian. It is very hard to explain it here, but I did it in a thread. The Higgs potential seems to be much simpler. It seems to be:

    \[V(H)=-M_H^2H^2+\lambda H^4\]
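As a sanity check of the potential above: its minimum sits away from H=0, at H=\sqrt{M_H^2/2\lambda}. A small Python sketch (the numbers M2 and lam are illustrative, not the physical SM values):

```python
# Minimum of the quartic Higgs-type potential V(H) = -M2*H^2 + lam*H^4.
# Setting dV/dH = -2*M2*H + 4*lam*H^3 = 0 gives H = sqrt(M2/(2*lam)).
M2, lam = 1.0, 0.13          # illustrative parameters, not SM values

analytic = (M2 / (2 * lam))**0.5

# Crude brute-force scan for the minimum over H in (0, 10).
H_best = min((i * 1e-4 for i in range(1, 100000)),
             key=lambda H: -M2 * H**2 + lam * H**4)

print(analytic, H_best)      # both ~1.961
```

The non-zero minimum is, of course, the whole point: the field sits at a non-vanishing vacuum expectation value.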

Historical remark: if you think that the electron was discovered in 1897, the top quark in 1995, and the Higgs boson in 2012, you realize particle physics is slow (just like the superheavy element searches, or even worse!). As a summary, we could say the Higgs field and SM physics are fundamental, but they face many unsolved issues. The Higgs field can be imagined, only approximately, as waves in a viscous fluid.

More questions:

  1. Is the found Higgs particle the SM Higgs particle or an impostor? Nobody knows, but it is very close to being the SM Higgs.
  2. Why are the Higgs couplings so different? Nobody knows; even string theorists face a formidable question here, since they have to explain why one solution is selected over the 10^{500} possible vacua when they reduce the theory to 4D.
  3. When particles interact with the Higgs, the coupling between the particles and the Higgs is a Yukawa coupling. Yukawa couplings do not come from any known symmetry in the SM. The Higgs vacuum is unstable or metastable. You need something to get a light Higgs mass. The Higgs mass is not protected by any symmetry in the SM, therefore something has to exist to keep Higgs bosons light. It could be SUSY. NO SUSY particle has been discovered so far. Nobody knows if SUSY is the answer to the lightness of the Higgs boson. We do not even know if the Higgs boson is elementary, or composite under some new super-strong force, as suggested by Terazawa. Yukawa couplings are introduced by hand.
  4. New physics beyond the SM (BSM) must exist. Several hints from observations are known. Alternatives (extra dimensions, compositeness, SUSY, axions,…) have not been excluded. Evidence for BSM includes neutrino physics (at least one neutrino species is massive), dark matter, dark energy,… The issue of Higgs stability is serious, since if the vacuum is not stable (only metastable or unstable), the Universe could tunnel to the true vacuum at any moment; fortunately, that has not happened. It will not happen in the near future, but it is important to understand this vacuum decay alternative better. When t_U>t_D, i.e., when the lifetime of the Universe becomes greater than the vacuum decay time, everything will change into another vacuum or phase. The existence (or not) of SUSY could bear on the stability of the Higgs vacuum. Perhaps the SM is valid up to the Planck scale with only minor modifications, like asymptotic safety and right-handed neutrinos. It would be terrible for particle physicists, but it is another option…

What is the simplest way to mathematically understand the Higgs mechanism? A matter field has a wavefunction \Psi. Physical symmetries act like \Psi'=q\Psi. Mass terms are not invariant under this symmetry, since

    \[m\Psi^2\rightarrow q^2m\Psi^2\]

The simplest solution to keep invariance is to add a new field, the Higgs field, to compensate the extra factor. Suppose you transform \Psi\rightarrow q\Psi, and the Higgs field as H\rightarrow H/q^2; then you keep invariance, since

    \[H\Psi^2\rightarrow \dfrac{H}{q^2}q^2\Psi^2=H\Psi^2=invariant\]
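The compensation trick can be checked with plain numbers; a toy Python check (q, H, Psi, m are arbitrary illustrative values):

```python
# Toy numerical check of the compensation trick described above:
# Psi -> q*Psi and H -> H/q^2 leaves H*Psi^2 unchanged, while m*Psi^2 does not.
q, H, Psi, m = 3.0, 2.5, 1.7, 0.5

before = H * Psi**2
after  = (H / q**2) * (q * Psi)**2
print(abs(after - before) < 1e-9)        # True: H*Psi^2 is invariant
print(m * (q * Psi)**2 == m * Psi**2)    # False: the bare mass term is not
```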

However, this is not yet the physical mass, since it is not an interaction mass. The final trick is to excite the Higgs field a little bit, writing

    \[H=\langle H\rangle+h\]


    \[H\Psi^2=\langle H\rangle \Psi^2+h\Psi^2\]

where the interaction mass is given by \langle H\rangle=gv and the interaction term is made of three fields. The Higgs boson is a property of the fluctuating vacuum. It should respect P, C, T symmetries. The Higgs mass is the only mass without explanation in the SM, because it is put in by hand. The SM gives no specific value for the Higgs mass, but it provided some consistency bounds (by unitarity, triviality and earlier precision measurements, we made some guesses). Mass is important since, due to the equivalence principle, there is only one mass (not three!). Inertial mass equals gravitational mass (or gravitational charge) with high accuracy. In order to explain why there is no antimatter (or very little of it) you have to invoke the Sakharov conditions: violation of CP (T), violation of baryon number (\Delta B\neq 0), and out-of-equilibrium physics. The 3 conditions seem to be satisfied in the context of the SM, at least from the phenomenological viewpoint.

Why is the Higgs boson lifetime about \tau\sim 2\cdot 10^{-22}s? Why is its decay width about \Gamma\sim 4\cdot 10^{-3}GeV? Why is M_H=125-126GeV? For the observed Higgs boson mass, the most likely decay products are b\overline{b}, W^+W^-, gg, \tau\overline{\tau},c\overline{c}, ZZ, \gamma\gamma and Z\gamma. Processes of Higgs bosons decaying into 4 leptons have been observed to identify the Higgs particle.
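The lifetime and the width quoted above are tied together by \tau=\hbar/\Gamma; a one-line Python check (using \hbar in GeV·s and a width of about 4 MeV, the SM prediction for a 125 GeV Higgs):

```python
# Lifetime-width relation tau = hbar/Gamma for the Higgs boson.
hbar_GeV_s = 6.582e-25   # hbar in GeV*s
Gamma = 4.1e-3           # GeV, approximate SM Higgs total width

tau = hbar_GeV_s / Gamma
print(tau)               # ~1.6e-22 s, consistent with tau ~ 2e-22 s
```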

See you in another blog post and thread!

LOG#189. Fundamental challenges.

The marriage between gravity and quantum mechanics is "complicated". The best physicists and brightest minds have tried, but only with partial success. String/superstring theory, now M-theory, is a curious story. The other story is canonical quantum gravity, or loop quantum gravity in its more modern formulation. Let me go backwards in time.

During the 20th century, we created the two greatest theories and frameworks of human science. They have names you know very well: Quantum Mechanics (that crazy theory) and relativity (oh, Einstein's baby! Isn't it?).

Quantum Mechanics is the microscopic theory of matter (molecules, atoms, nuclei, elementary particles) and energy. Despite what you may have heard, there is no problem with making Quantum Mechanics a relativistic theory. It is called Quantum Field Theory (QFT). The Standard Model is a QFT covering every subatomic force known so far (electroweak and strong), but it does NOT cover gravity.

Relativity, and here I mean GENERAL relativity, is the simplest theory that is consistent locally with both special relativity and the equivalence principle. The equivalence principle, as Sheldon tries to explain to Penny in a certain The Big Bang Theory episode, states that there is only one notion of mass. Gravitational masses and inertial masses are the same. Well, indeed, there are 3 notions of mass: 2 gravitational, and one inertial. Anyway, experiment shows that, to an incredible precision, these masses are the same, with an accuracy of 1 part in 10^{12-14}. So, for all practical purposes, and for the common experiments or experiences we have, they are equal. Let me remind you of the 3 masses:

  1. Active gravitational mass. \int g\cdot dS=-4\pi G_N M.
  2. Passive gravitational mass. P=Mg.
  3. Inertial mass. F=Ma.

I have assumed, but the result is completely general, that D=4. This theory of general relativity is built only on the general covariance principle and the equivalence principle (some sort of Mach principle contextualized in the 20th century…). General relativity yields Newtonian theory as an approximation, but it also includes new effects and, in principle, an infinite set of perturbative corrections. It also explains gravity at large or very large scales (well, excepting dark matter and dark energy, which are just plugged into the equations by hand, ad hoc, with no explanation). General relativity explains and describes the expansion of the Universe, predicts the existence of black holes, and gravitational waves (note that there are NO gravitational waves in Newtonian gravity!).

Hard marriages tend to bring big challenges. The biggest challenge is that the theory predicts its own breakdown. The breakdown events are indeed the structure and dynamics of black holes. It makes us ask: what is gravity? What is gravitational quantum mechanics? What is, at last, quantum mechanics?

Gravity was the first fundamental force we discovered. It was Newton's genius (and likely also the hidden role of Hooke, the person Newton most hated and defenestrated) that discovered gravity, though some people discuss from time to time whether it was discovered by Hooke…Anyway, returning to our theme, Newton (and/or Hooke) managed to unify terrestrial mechanics with celestial mechanics. The result was Universal Gravitation. In fact, there were suspicions by Galileo (the first modern scientist), supported by the Kepler and Copernicus data and observations, that the geocentric view of the Universe was unsustainable. Likely, even some Greeks like Hipparchus also knew part of all this stuff.

Gravity is an engine! The simplest way to realize it is due to Galileo: take any inclined plane. An inclined plane is a machine that transforms height H into energy. In fact, E\propto v^2\propto H. You can do it yourself. Pick any round object and an inclined plane. Leave the object at a certain height H and observe what happens. That was done by Galileo hundreds of times! The coolest thing about this observation is that the fuel it uses is…Gravity. Technically, as you learn in High School, energy conservation reads:

    \[mgH=\dfrac{1}{2}mv^2\]

Then, you get v=\sqrt{2gH}, and \dfrac{1}{2}mv^2=mgH. Have you ever imagined that inclined planes were engines fueled by gravity? It is something beautiful. These arguments produced a conceptual revolution in philosophy. And a conceptual revolution in physics, due to the introduction of the notion of energy by Leibniz (another Newton foe, but this time protected, since he managed to teach you differentials and calculus in a more intuitive way than Newton's fluxion calculus). Descartes, trying to make energy more geometrical, introduced the notion of linear momentum (p=mv). In fact, Leibniz's monads have a secret link with his findings in physics and mathematics (physmatics, yeah). Monads are equivalent to energy. Monads are the dynamical atoms of the Universe of those times. As a consequence of the same thing any inclined plane explains, we can also build a clock. As YM^2=Gravity says, double the bet: take 2 inclined planes. You can make any object "oscillate" between 2 arbitrary inclined planes, if you do it properly. Furthermore, it turns out that the solar system itself, as the ancient ones taught us (but using religion or myths, not math, not physics or physmatics), shows that any pair of celestial bodies can be used as a CLOCK. Using the universal law of gravity, and the notion of centripetal force, you obtain

    \[\dfrac{G_NMm}{R^2}=\dfrac{mv^2}{R}\]

And from this equation, you get Kepler's 3rd law, relating the period (time, clock, tick tock, tick tock,…) with the distance between two astronomical objects:

    \[T^2=\dfrac{4\pi^2}{G_NM}R^3\]

I am assuming the space-time is 4D, so space is 3d. Using the same trick, but with atoms instead of gravitating objects, you see that the same is also true in atomic physics. Any pair of bodies also acts as a clock in motion. Since Aristotle's times, celestial bodies have served us (well, indeed, from much more ancient times) as eternal clocks. However, as we now know, they are not eternal; they only live much longer than a human life. Even more, the whole Universe is a machine made by gravity, taking the machine as an equation:

    \[F_N=\dfrac{G_NM_1M_2}{R_{12}^2}\]

From this viewpoint, the force F_N is associated to motion. The gravitational constant G_N is the universal conversion factor, the energy or fuel is the mass (energy, in relativity) M_1M_2, and the height is just R_{12}, similarly to the inclined plane. Every celestial body falls down. Even the Doctor falls down, despite the fact he regenerates. The fascinating thing about G_N is that it is not a pure number; it has dimensions. It is not like the Reynolds number in fluid theory, or \pi in mathematics. It is similar to another constant, G_F, the Fermi constant of weak interactions. The Fermi constant and the weak interaction explain radioactive decay. Due to weak interactions, some particles are unstable and not eternal, e.g., the neutron. Some grand unification theories (GUTs) predict that even protons decay (but they have a very long lifetime). Mathematically speaking,

    \[\left[G_N\right]=\dfrac{L^3}{MT^2}\]

and G_N has units of m^3s^{-2}kg^{-1}=Nm^2/kg^2. In general relativity, we do something just a bit better. As we merge space and time into space-time, if we do not distinguish length and time, we must introduce a new scale or conversion factor. It is the speed of light c=299792458m/s. In fact, c=LT^{-1}. Physical dimensions are categories (category theory fans reading me just know it?) that matter in physics: they classify quantities. Extra example: the fine structure constant is a pure number, created in this way

    \[\alpha_{em}=\dfrac{K_Ce^2}{\hbar c}\approx \dfrac{1}{137}\]

With units such that c=\hbar=K_C=1 you obtain \alpha=e^2\approx 1/137.
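A quick numerical check of this formula in SI units (rounded constants):

```python
# Numerical check of alpha = K_C e^2/(hbar c) ~ 1/137, in SI units.
K_C  = 8.9875e9    # N*m^2/C^2, Coulomb constant
e    = 1.6022e-19  # C
hbar = 1.0546e-34  # J*s
c    = 2.9979e8    # m/s

alpha = K_C * e**2 / (hbar * c)
print(alpha, 1 / alpha)    # ~0.0073, ~137.0
```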

What is a black hole (BH)? Well, the details are NOT fully known. But there are ways to understand a BH in the fashion I have explained here. A black hole is something that traps light. A black hole does not let light escape. BHs are interesting because light is interesting. So, well, what is space-time? What is light? Light consists of waves, or quanta, of electromagnetic fields. You are reading me through the screen of your computer or e-device. The device is at a certain distance d from your eyes. You are seeing me in the past (OK, only some minutes after I published, plus a few fractions of a second). If your eyes are about a meter from the screen, the light takes about 3 nanoseconds to arrive at your eyes, plus some negligible time to be processed in your brain. To see any portrait of Nature, or a nice girl/boy, means that light travels from them towards us. But, since the speed of light IS finite, it requires some time t_L=d/c. That is common knowledge, and it is used by astronomers, astrophysicists, and collider physicists all over the world. To see the stars (the Sun or any other) is to see into our past, or the past of the Universe. You have a time machine every night. For visionaries, a question: how far and how deep could you look if you could travel with v>c, violating special relativity in 3+1 space-time? Hint: multi-temporal relativity and other forgotten relativities (I promise, I will post about them in the future) can go beyond the speed of light, with a price.

Telescopes are real time-machines. The night sky is a wonderful cosmic TV! It is a screen. Not just like a cinema or theater, but also a nice screen. Space-time is a network. It is just like the continuum limit of a lattice made up of light rays. To understand space-time IS to interchange concepts like length L and time T in a dual way, using the speed of light. In the same way, it shows that mass and energy are the same thing as well; only a conversion factor is necessary. And as you know, E=Mc^2, don't you? Moreover, since any physical velocity in the Universe, excepting that of massless force carriers, is less than c, force carriers travel at the speed of light if they are massless, and gravity alters not only matter but also light rays, since they carry energy E=\hbar \omega. Now, we have two constants, G_N, c, to play with. So,

    \[\dfrac{L^3}{T^2}=L\dfrac{L^2}{T^2}=L c^2\]

Therefore, using the definition of G_N and its dimensions, we can form a gravitational length associated to any relativistic object. Up to a pure number, it reads

    \[L_G=\dfrac{G_NM}{c^2}\]

Essentially, experts might say, it is half of the Schwarzschild radius. Note that if the speed of light were infinite, the gravitational length would be ZERO. In reality, the Schwarzschild radius is

    \[R_S=\dfrac{2G_NM}{c^2}\]

and the gravitational length defined above is

    \[L_G=\dfrac{G_NM}{c^2}=\dfrac{R_S}{2}\]

This characteristic length exists for ANY object in the Universe. Even for you. Neglect any other interaction, and plug a mass there. For instance, for the Sun it is 1.5 km; for the Earth it is about 4 mm (the Schwarzschild radii of these objects are 3 km and 8 mm, approximately). The hidden reason you don't notice your gravitational length is that real life scales are much bigger than gravity scales. However, if you were able to turn off every interaction excepting gravity, your length would be L_G. Did Marvel know this? Did Ant-Man know? We have arrived at a surprising new result. Black holes are what is left if you turn off everything excepting gravity. BHs have only mass (or some charges, as I told you in the previous post). Of course, this is the no-hair theorem, which could be wrong, or softly wrong, given recent research.
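Here is a minimal Python sketch of L_G=G_NM/c^2 for several bodies (rounded constants; the 70 kg human mass is, of course, just an example):

```python
# Gravitational length L_G = G_N*M/c^2 (half the Schwarzschild radius) for
# the Sun, the Earth, a person and an electron, as discussed above.
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.9979e8    # m/s

bodies = [("Sun", 1.989e30), ("Earth", 5.972e24),
          ("human", 70.0), ("electron", 9.109e-31)]
for name, M in bodies:
    print(name, G * M / c**2, "m")
# Sun ~1.5e3 m, Earth ~4.4e-3 m, human ~5e-26 m, electron ~6.8e-58 m
```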

Stage two is Quantum Mechanics. Classical Mechanics is a sort of code or tool that allows us to distinguish between what could happen and what does happen. During the 18th and 19th centuries, analytical mechanics introduced a new object into physics. It is called the action. Action is usually denoted by S (please do not confuse it with the entropy S, another S). The action allows us to derive Newton's laws from another, much more general principle. It is called the minimal action principle. It is an analogue of the maximal entropy (Max-Ent) principle in thermodynamics. Then, what is action? S is not for Superman, hehehe. Geeknerd quote: "It is not an S. On (our) my planet it means hope (quantization!)". The dimensions of action are:

    \[S=MT\cdot c^2=ML^2T^{-1}\]

Conceptually, action is a simple thing. Any action is the product of the energy you use in some process and the time the process lasts. Using space-time dimensions, the action is:

    \[S=E\cdot T=\dfrac{ML^2}{T}=\left(\dfrac{ML}{T}\right)L=p\cdot L\]

that is, action IS linear momentum times spatial size, or a type of angular momentum times a phase (phases are adimensional). The quantum revolution was born here. There exists a MINIMAL VALUE for the action. It is named the quantum of action. That is \hbar or h. Classical physics is just the theory in which the minimal action is ZERO, or, if you wish, the limit in which the quantum of action goes to zero. Classically, there is nothing that forbids a finite variation of energy in an instant (\Delta t=0). Quantum Mechanics is different. The new axiom or postulate (physical or mathematical) is that you get a minimal quantum of action h, or \hbar=h/2\pi\approx 1.05\cdot 10^{-34}J\cdot s. This happened in 1900. Max Planck had to do it in order to explain blackbody radiation. Thus, energy quantization is not truly fundamental: action QUANTIZATION is MORE FUNDAMENTAL. With a minimal action, you get a new quantum length scale. It is the Compton or quantum wavelength

    \[L_Q=\dfrac{\hbar}{Mc}\]

Old atomists believed that atoms were static, and that mass was a sum of invariable atoms. The new atomism, granted by Quantum Mechanics (and its statistical interpretation), states that atoms are dynamical. This was highlighted by L. Boltzmann, who was bullied for it, with terrible consequences for his own life.

That action is a sum of quanta of action is something you could remember from school, if you learned about the Bohr-Sommerfeld correction to the Bohr atom. That quanta of action are what matters drives you into a deep question about the localization of energy. After all, the quantum length is L_Q\propto 1/M, and it is independent of G_N. Einstein, in 1905, envisioned how to apply the localization of photons, with light quanta, to explain the otherwise paradoxical photoelectric effect. The photoelectric effect is quantum energy being localized. Now, we have three scales for any object: its own normal size L, the quantum size L_Q, and its gravitational size L_G, with L>>L_G>>L_Q in general for macroscopic objects. For instance, any electron has L_Q>>L>>L_G. Therefore, the electron IS quantum. Real life for humans is generally classical, at scale L. How many atoms of action do you have for these objects? Imagine some object, like a house. It has a mass M. Houses have a size L. The number of quanta of action you get, for normal houses, is:

    \[N=\dfrac{MLc}{\hbar}\]

If the house shows to be gravitational, the number of gravitational quanta (boxes) is

    \[N_G=\dfrac{ML_G}{\hbar} c\]

and if the house shows to be quantum,

    \[N_Q=\dfrac{ML_Qc}{\hbar}=1\]

The ideal house, or box, with zero size does not exist in our Universe. If you do the same for the electron, the platonic electron has non-null normal size (you have ensembles of electrons in atoms!), gravitational electrons do not exist (to our knowledge, N_G<1), and for quanta of electrons you easily get N_Q=1. Of course, the deep question is why elephants, cars, houses, humans or other entities are NOT electrons. Quantum fluctuations are minimal changes of action, i.e., N\rightarrow N+1 or N\rightarrow N-1, but you can get others exciting more quanta. The macroscopic world, for elephants, cars, houses, humans, or entities, is a world where N\sim N+1. When N and N+1 are distinguishable, you get the QM weirdness: quantum particles, like the electron. Thus, what is Quantum Gravity? There is no unique answer. Experts differ about it. According to these lines, since gravity introduces L_G, and thus atoms of action N_G depending on mass in a universal way, quantum gravity must be a theory with its own quanta of action. These quanta are yet UNKNOWN, but their number is provided in an invariant way by the equations

    \[N_G=\dfrac{ML_G}{\hbar}c=\dfrac{ G_NM^2}{\hbar c}=\dfrac{G_NE^2}{\hbar c^5}=\left(\dfrac{E}{E_P}\right)^2\]

where the Planck energy is about 10^{19}GeV=10^{16}TeV. Thus, quantum gravity introduces a fundamental new number into physics: the minimal number of quanta of action for gravity, N_G. It is not 1 but N_G(M). It depends on G_N, \hbar, c. When gravity is turned on, we have to push up the energy. For classical physics, this quantum of action is zero; for quantum physics it is about 1; but for quantum gravity you have to lift it up. To see this easily, check yourself that when you put L_Q=L_G you get the Planck mass, or equivalently, the Planck length. If you do it, you will convince yourself of all this. Additionally, if you introduce a non-null vacuum energy density, you introduce a new quantity, the cosmological constant. It introduces new length scales (how many scales are there, after all?). The idea is that even the vacuum has a "minimal mass" or energy. One option is to use the scale:

    \[M_W=\dfrac{\hbar}{c}\sqrt{\dfrac{\Lambda}{3}}\]

Note that G_N is not there, so it is not gravitational at all! Also, you can introduce the scale

    \[R_\Lambda=\sqrt{\dfrac{3}{\Lambda}}=R_U\approx 10^{26}m\]

Compare it to the Planck length, 10^{-35}m. The classical radius of the electron is about 10^{-15}m, similar to the nuclear size. From M_W you can get a scale R_W, and take, e.g., R=(R_P^2R_W)^{1/3}\sim 1 fm, where R_P=L_P is the Planck length, and you get just about the nuclear size again. The remaining mass scale with gravity is

    \[M_W^{'}=\dfrac{M_P^2}{M_W}=\dfrac{c^2}{G_N}\sqrt{\dfrac{3}{\Lambda}}\]

The radii or length scales

    \[R_W=\dfrac{\hbar}{M_Wc}=\sqrt{\dfrac{3}{\Lambda}}=L_\Lambda\]

    \[R_W^{'}=\dfrac{\hbar c}{G_N}\sqrt{\dfrac{\Lambda}{3}}=L_P^2/L_\Lambda\]


represent different systems. The quantity g=G_N\hbar\Lambda/c^3=L_P^2\Lambda\sim 10^{-120}-10^{-122} is really tiny. Note that M_WM_W^{'}=M_P^2, R_WR_W^{'}=R_P^2, and that there is also a duality of M_W alone with respect to M_P. How many mass scales are there, then? How many quanta of action? The cosmological quantum of action is very mysterious. You could find

    \[n_\Lambda=\dfrac{M_\Lambda}{M_W}=N^{1/4}\sim 10^{30}\]


    \[N_\Lambda=\dfrac{M_W^{'}}{M_\Lambda}=N^{3/4}\sim 10^{90}\]
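These big ratios can be checked numerically against the powers of a single dimensionless number; here I assume, as a hedged guess consistent with the exponents above, that N is the usual 1/(L_P^2\Lambda)\sim 10^{122} of the cosmological constant problem. A Python sketch (observed \Lambda\approx 1.1\cdot 10^{-52}m^{-2}; everything order-of-magnitude):

```python
# Check of the cosmological ratios n_Lambda ~ N^(1/4) and N_Lambda ~ N^(3/4),
# assuming N = 1/(L_P^2 * Lambda) ~ 1e122 (an assumption, see lead-in).
hbar = 1.0546e-34; c = 2.9979e8; G = 6.674e-11
Lam = 1.1e-52                    # m^-2, observed cosmological constant

M_P = (hbar * c / G)**0.5        # Planck mass
M_W = (hbar / c) * (Lam / 3)**0.5
M_L = (M_P * M_W)**0.5           # M_Lambda, the geometric mean

n_L = M_L / M_W                  # ~3e30
N_L = (M_P**2 / M_W) / M_L       # ~3e91
N   = 1 / ((hbar * G / c**3) * Lam)   # 1/(L_P^2 * Lambda) ~ 3e121

print(n_L, N_L, N)
print(n_L / N**0.25, N_L / N**0.75)   # both within a factor of a few of 1
```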

Remark: R_W^{'}<<L_P. Thus, for elementary particles in quantum mechanics one should have M_\Lambda<M<M_P; for quantum black holes (?) one could guess M_P<M<M_\Lambda^{'}; and for M_\Lambda^{'}<M<M_W^{'} one could expect classical black holes. Let me put the five scales of mass into numbers:

1st. M_W=\dfrac{\hbar}{c}\sqrt{\dfrac{\Lambda}{3}}\approx 10^{-67}g\approx 6\cdot 10^{-35} eV/c^2

2nd. M_\Lambda=\sqrt{M_PM_W}\approx 10^{-35} g\approx 6 meV/c^2.

3rd.M_P=\sqrt{\hbar c/G_N}\approx 10^{-5}g=6\cdot 10^{27} eV/c^2=6\cdot 10^{6} ZeV/c^2.

4th. M_\Lambda^{'}=M_P^2/M_\Lambda\approx 10^{25}g\approx 6\cdot 10^{57}  eV/c^2.

5th. M_W^{'}=M_P^2/M_W\approx 10^{56}g\approx 6\cdot 10^{88} eV/c^2.

There are two additional mass scales, you can make:

6th. M_T=\left(\dfrac{\hbar^2\sqrt{\Lambda}}{G_N}\right)^{1/3}\approx 10^{-28}kg\approx 60 MeV/c^2. Note that there is no speed of light here!

7th. M_T^{'}=c\left(\dfrac{\hbar}{G_N^2\sqrt{\Lambda}}\right)^{1/3}\approx 10^{12}kg\approx 6\cdot 10^{47} eV/c^2.
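To put the whole list into a single script, here is a Python sketch rebuilding the seven scales from \hbar, c, G_N and the observed \Lambda (rounded constants; all values are order-of-magnitude, and small factor discrepancies with the rounded figures quoted above are to be expected):

```python
# The seven mass scales listed above, rebuilt from hbar, c, G_N and Lambda.
hbar = 1.0546e-34; c = 2.9979e8; G = 6.674e-11
Lam = 1.1e-52                                 # m^-2, observed Lambda

M_W  = (hbar / c) * (Lam / 3)**0.5            # ~2e-69 kg
M_P  = (hbar * c / G)**0.5                    # ~2.2e-8 kg, Planck mass
M_L  = (M_P * M_W)**0.5                       # ~7e-39 kg (~4 meV/c^2)
M_Lp = M_P**2 / M_L                           # ~7e22 kg
M_Wp = M_P**2 / M_W                           # ~2e53 kg
M_T  = (hbar**2 * Lam**0.5 / G)**(1/3)        # ~1.2e-28 kg (~60-70 MeV/c^2)
M_Tp = c * (hbar / (G**2 * Lam**0.5))**(1/3)  # ~4e12 kg

for name, M in [("M_W", M_W), ("M_Lambda", M_L), ("M_P", M_P),
                ("M_Lambda'", M_Lp), ("M_W'", M_Wp),
                ("M_T", M_T), ("M_T'", M_Tp)]:
    print(name, M, "kg")
```

Note also that the dualities M_WM_W^{'}=M_P^2 and M_\Lambda M_\Lambda^{'}=M_P^2 hold by construction.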

To finish, let me point out something I missed in previous posts. The Beck deduction of \Lambda with quantum information theory uses the formula

    \[\Lambda=\dfrac{m_e^6G_N^2}{\alpha^6 \hbar^4}\]

It has been deduced as well by Harko, using different arguments, to provide

    \[\Lambda=\dfrac{L_p^4}{r_e^6}=\dfrac{\hbar^2 G_N^2m_e^2c^6}{e^{12}}\]

and it gives, as I mentioned, the observed value \Lambda\sim 10^{-56}cm^{-2}. Of course, whether this number and formula are correct, or just a numerological coincidence, remains to be elucidated.
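The Beck formula is easy to evaluate; a Python check with rounded CODATA inputs:

```python
# Numerical evaluation of the Beck formula Lambda = m_e^6 G_N^2/(alpha^6 hbar^4)
# quoted above, compared with the observed value ~1e-52 m^-2 (= 1e-56 cm^-2).
m_e   = 9.109e-31    # kg, electron mass
G     = 6.674e-11    # m^3 kg^-1 s^-2
hbar  = 1.0546e-34   # J*s
alpha = 7.2974e-3    # fine structure constant

Lam = m_e**6 * G**2 / (alpha**6 * hbar**4)
print(Lam)   # ~1.4e-52 m^-2, remarkably close to the observed value
```

The dimensional check also works: kg^6 (m^3 kg^-1 s^-2)^2 / (J s)^4 reduces to m^-2, an inverse length squared, as a cosmological constant should be.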

In summary, quantum mechanics and general relativity ARE important because:

  1. Quantum mechanics introduces quanta of action. It is not only that energy is quantized: everything is, in the end, quantized. Quantum mechanics is the theory to consider if you want to describe something light and small.
  2. General relativity introduces space-time and G_N, c^2 on equal footing. It is the theory you need in order to describe something BIG and MASSIVE.
  3. Quantum gravity is the theory you need in order to describe particles that are both small and very massive. No one has had complete success. Maybe quantum mechanics and general relativity, as they are formulated, have to be modified. Are QM and GR effective theories in the end? Is space-time (or the quantum of action) emergent?

Some (right or not, who knows?) hints:

  1. The measurement problem.
  2. QFT is not (to my knowledge) affected by a maximal acceleration/force, A=Mc^3/\hbar=Ec/\hbar. Maximal or critical acceleration plus Newtonian gravity gives you the Planck length (check it yourself!).
  3. Quantum rest is impossible due to the HUP or GUP.
  4. Quantum fluctuations of the vacuum imply a fluctuating non-null vacuum energy density. QFT gives a wrong result…by 122 orders of magnitude. SUSY does not solve it all…
  5. Electrons cannot be extremal black holes, or can they? The Schwarzschild radius of an electron is about 10^{-57}m. The RN radius for an electron, R_Q^2=K_CG_Ne^2/c^4, is about R_Q\sim 10^{-36}m<L_P.
  6. Einstein's idea of elementary particles as singularities cannot be correct. Or can it? The issue of the geodesic motion of such a particle in general relativity is crucial.
  7. Primordial black holes (PBH), i.e., black holes created in the early Universe, not as remnants of hypermassive stars but as a result of density fluctuations due to inflation or post-inflation physics, could be the dark matter we are searching for. The open PBH window is in the mass range 10^{14}-10^{23}kg.
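The electron radii quoted in hint 5 above can be checked quickly (rounded SI constants):

```python
# Schwarzschild radius and Reissner-Nordstrom charge radius of the electron,
# R_Q = sqrt(K_C * G_N * e^2)/c^2, as quoted in the hints above.
G   = 6.674e-11    # m^3 kg^-1 s^-2
c   = 2.9979e8     # m/s
K_C = 8.9875e9     # N*m^2/C^2
m_e = 9.109e-31    # kg
e   = 1.6022e-19   # C

R_S = 2 * G * m_e / c**2              # ~1.35e-57 m
R_Q = (K_C * G * e**2)**0.5 / c**2    # ~1.4e-36 m, just below L_P ~ 1.6e-35 m
print(R_S, R_Q)
```

Since R_Q is far larger than R_S, a naive classical electron would be a super-extremal Reissner-Nordstrom object, which is part of why the hint says electrons cannot be (ordinary) extremal black holes.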