## LOG#236. Exoplanets.

The Nobel Prize in Physics this year, 2019, was awarded to Jim Peebles, Michel Mayor and Didier Queloz. The latter two devoted their careers to the search for extrasolar planets, or exoplanets for short. That is, the search for worlds around other stars where life like us, or even not like us, could develop. Astronomy now has the tools to fulfill the Giordano Bruno prophecy: there are billions of worlds out there. There is no heresy. Even though the first exoplanets were found around pulsars in 1992, the work of Mayor and Queloz provided the evidence, long dreamed of, that there are worlds around other suns. Wherever you look at the sky during the night, there is likely at least a single exoplanet per star on average in our galaxy. Of course, there are stars without planets, stars with several planets, and even planets without stars (kicked out from their orbits, or wandering through space).

This short post is meant to introduce you to the science of exoplanets in a simple way. I am not going too deep now, since that would require more time than I have (even when I wish I had it!). Beyond this new category of posts, you will of course ask how we detect exoplanets. Well, there are different methods. The first one allowed Mayor and Queloz to find 51 Peg b and, after 24 years, win a Nobel Prize… Science is slow so many times…

1st. Radial velocity. At the Nobel conference, Ulf Danielsson reviewed this method. A star with a planet orbiting around it wobbles in its motion. The star emits light, and light has a spectrum. The wobbling star shifts its spectrum due to the pull of the hidden exoplanet, so you measure a shifted spectrum. The consequence is that the radial velocity we measure from the star undergoes harmonic motion! That is, due to the presence of the exoplanet, the radial velocity varies periodically in time. The method requires the measurement of the period $P$, done with a time series (the repeated measurement of the velocity of the star in time), the orbital radius of the circular motion (we suppose a circular motion for simplicity, but in general there is an eccentricity), and the mass of the system. Generally speaking, we do not know the mass of the exoplanet from radial velocity alone, but we can get a bound on its mass, since we usually do not know the orbital inclination with this method. Were the inclination known, you could derive the exoplanet mass. Generally, other types of observations are required to measure $i$, the orbital inclination, but it can be done. Other methods even provide a tool to derive the radius of the exoplanet (see below the transit method). Mathematically,

(1)   $v_\star \sin i = m_p \sin i \sqrt{\dfrac{G}{(m_\star + m_p)\,a}}$

where $G$ is the gravitational constant, $m_\star$ and $m_p$ are the star and the exoplanet masses, $i$ the orbital inclination, and $a$ the orbital radius (generally the semi-major axis). Given the Kepler third law $P^2 = 4\pi^2 a^3/G(m_\star+m_p)$, and $m_p \ll m_\star$, then you can transform the above equation into

(2)   $v_\star \sin i \approx \left(\dfrac{2\pi G}{P}\right)^{1/3}\dfrac{m_p \sin i}{m_\star^{2/3}}$

The radial velocity method is useful for telescope searches with spectrometers, and it is mainly a tool for finding large exoplanets close to the observer. This method is not the most popular today, as technology can now do better with other methods, but it is still useful to find some exoplanets, or as a check or additional independent method to confirm the existence of some planets in some exosystems.
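As a quick sanity check of the scaling above, here is a minimal Python sketch. The 51 Peg b-like numbers ($P \approx 4.23$ days, $m_p \approx 0.47\,M_J$, $m_\star \approx 1.05\,M_\odot$, circular edge-on orbit with $\sin i = 1$) are illustrative assumptions, not fitted values:

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
M_JUP = 1.898e27     # Jupiter mass, kg

def radial_velocity_amplitude(period_s, m_planet, m_star):
    """Radial-velocity semi-amplitude (m/s) for a circular, edge-on
    orbit (sin i = 1), in the limit m_planet << m_star."""
    return (2 * math.pi * G / period_s) ** (1.0 / 3.0) * m_planet / m_star ** (2.0 / 3.0)

# Illustrative 51 Peg b-like numbers: P ~ 4.23 days, m_p ~ 0.47 M_Jup, m_star ~ 1.05 M_Sun
K = radial_velocity_amplitude(4.23 * 86400, 0.47 * M_JUP, 1.05 * M_SUN)
print(f"Stellar wobble semi-amplitude: {K:.1f} m/s")
```

The answer comes out at a few tens of m/s, which is exactly the kind of Doppler signal Mayor and Queloz had to dig out of their spectra.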

2nd. Transit method. Also known as the transit photometry method. It uses the measurement of the total flux from a star. During the transit of any exoplanet through the line of sight between the star and us, the amount of light we receive decreases, like in our solar eclipses. The measured flux variation is

$\dfrac{\Delta F}{F_0} = \dfrac{F_0 - F_t}{F_0} = \left(\dfrac{R_p}{R_\star}\right)^2$

where $F_0$ is the flux without eclipse, and $F_t$ is the flux during the transit of the exoplanet across the star. The relative variation of the flux is proportional to the relative size of the exoplanet radius and the star radius, squared. This method can produce false positives due to stellar activity or other bodies mimicking exoplanet transits. This method allows you to know the period, the orbital radius and the exoplanet size, but there are degeneracies with the inclination, and measuring the stellar radius precisely is hard (not now as much as it was in the past, though), and the final test is the capability to measure the planet mass.
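The transit-depth formula is easy to play with. A minimal sketch, using solar system radii as stand-ins for the star and planet:

```python
def transit_depth(r_planet_km, r_star_km):
    """Fractional flux drop during a central transit: (Rp / R_star)^2."""
    return (r_planet_km / r_star_km) ** 2

R_SUN = 696_000.0   # solar radius, km
R_JUP = 71_492.0    # Jupiter equatorial radius, km
R_EARTH = 6_371.0   # Earth radius, km

print(f"Jupiter-Sun transit depth: {transit_depth(R_JUP, R_SUN):.4f}")   # about 1%
print(f"Earth-Sun transit depth:   {transit_depth(R_EARTH, R_SUN):.6f}") # about 0.008%
```

A hot Jupiter dims its star by about one percent, easily measurable; an Earth-sized planet gives less than one part in ten thousand, which is why space photometers like Kepler were needed.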

3rd. Additional methods. Current technology in astronomy has even allowed and granted with more detection methods:

• Direct imaging. Even when far, far away, we have managed to directly observe some exoplanets. Of course, not with the resolution of solar system planet images, but it can be done, and it will be done better in the near future and beyond. Coronagraphy (and coronagraphs) is part of the future toolset that will allow us to see other worlds and, perhaps, to detect life on their surfaces (even ETI).
• Timing. It is used with pulsars (like the first exoplanet detections in 1992 around pulsar stars), binaries and multiplanetary systems as an additional method.
• Gravitational microlensing. Future searches will benefit from this technique based on general relativity. For large distances and large masses, even though it requires a single-trial alignment with the source, it can potentially detect exoplanets and measure their masses. So, Einstein even contributed to the science of exoplanets! Didn’t he?
• Space gravitational wave detectors. Recently, it was even proposed that space-borne gravitational wave telescopes/interferometers could detect exoplanets under certain circumstances (especially around white dwarfs). So, it yields an additional motivation to build up space gravitational observatories!

Dyson suggested that citizen scientists could try to search for exoplanets around small stars like M dwarfs or white dwarfs. There, transits are easier to see, even though it likely requires good telescopes and lenses to see a transit around such faint stars.

All hail Helvetios and Dimidium, the star and the exoplanet of the 51 Peg system; the exoplanet 51 Peg b was formerly known as Bellerophon.

See you in another blog post!

## LOG#235. Hyperballs.

Hi, everyone. The saddest thing about my job (working with teenagers and other people) is that it delays other stuff, like blogging! So, you should be patient about getting more of “my stuff”.

What is going on today? Hyperballs. Or hyperspheres and some cool variations. I have written about hyperspheres here before… I even provided formulae for their volumes (and areas). You know, from higher dimensions you can get the hyperparallelotope (hypercube) with hypervolume $V_d=(2R)^d$, or the cross-polytope with hypervolume $V_d=2^dR^d/d!$; for the known 3d sphere you get $V=\frac{4}{3}\pi R^3$, and the more general formula for the d-dimensional hypersphere or usual hyperball volume is

(1)   $V_d(R)=\dfrac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}\,R^d$

where $\Gamma(x)$ is the gamma function and $R$ is the radius. The hyperarea can be obtained from the recurrence in $d$ from lowering dimensions and a useful derivative gadget tool:

(2)   $A_{d-1}(R)=\dfrac{dV_d(R)}{dR}=\dfrac{2\pi^{d/2}}{\Gamma\left(\frac{d}{2}\right)}\,R^{d-1}$

For any 3d ellipsoid you can also derive that, given semi-axes $a$, $b$, $c$: $V=\frac{4}{3}\pi abc$, and similar formulae hold in higher dimensional hyperellipsoids, following the pattern $V=V_d(1)\prod_{i=1}^d a_i$, where $V_d(1)$ is the hypervolume of the unit hypersphere and $a_i$ are the hyperellipsoid (HY) semi-axes.
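The hyperball volume formula (1) and its derivative gadget (2) can be checked numerically in a few lines. A minimal sketch:

```python
import math

def hyperball_volume(d, R=1.0):
    """Volume of the d-dimensional ball of radius R:
    pi^(d/2) / Gamma(d/2 + 1) * R^d."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** d

def hypersphere_area(d, R=1.0):
    """Hyperarea of the boundary of the d-ball, obtained as dV/dR = d*V/R."""
    return d * hyperball_volume(d, R) / R

print(hyperball_volume(2))  # area of the unit disk: pi
print(hyperball_volume(3))  # volume of the unit 3-ball: 4*pi/3
print(hypersphere_area(3))  # surface of the unit 2-sphere: 4*pi
```

The familiar low-dimensional values drop out immediately, and you can also verify the curious fact that the unit hyperball volume peaks around $d=5$ and then shrinks toward zero.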

On the other hand, you do not need to keep things so simple; you can even change the norm in $\mathbb{R}^d$. Thus, having a vector $x$ in $\mathbb{R}^d$ with norm

$\|x\|_p=\left(\displaystyle\sum_{i=1}^d |x_i|^p\right)^{1/p}$

then the so-called p-normed hyperball volume in $d$ dimensions follows:

$V_d^{(p)}(R)=\dfrac{\left(2\Gamma\left(\frac{1}{p}+1\right)\right)^d}{\Gamma\left(\frac{d}{p}+1\right)}\,R^d$

In particular, you get $V_d^{(1)}=\dfrac{2^dR^d}{d!}$ and $V_d^{(\infty)}=(2R)^d$, and those match the expressions for the cross-polytope and the n-cube. Another possible generalization is the next one. For any real positive numbers $p_1,\ldots,p_d$ you can even define the balls:

$B(p_1,\ldots,p_d)=\left\{x\in\mathbb{R}^d:\ |x_1|^{p_1}+\cdots+|x_d|^{p_d}\le 1\right\}$

Since Dirichlet’s times, mathematicians have known the general formula for these hyperballs/hyperspheres:

$\mathrm{Vol}\left(B(p_1,\ldots,p_d)\right)=2^d\,\dfrac{\prod_{i=1}^d\Gamma\left(\frac{1}{p_i}+1\right)}{\Gamma\left(1+\sum_{i=1}^d\frac{1}{p_i}\right)}$
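Both the p-normed ball volume and the Dirichlet formula reduce to the classical cases, and that is easy to test. A minimal sketch (the function names are mine):

```python
import math

def p_ball_volume(d, p, R=1.0):
    """Volume of the d-dimensional p-norm ball of radius R:
    (2*Gamma(1/p + 1))^d / Gamma(d/p + 1) * R^d."""
    return (2 * math.gamma(1 / p + 1)) ** d / math.gamma(d / p + 1) * R ** d

def dirichlet_volume(exponents):
    """Dirichlet's formula for the region sum |x_i|^{p_i} <= 1."""
    num = math.prod(math.gamma(1 / p + 1) for p in exponents)
    return 2 ** len(exponents) * num / math.gamma(1 + sum(1 / p for p in exponents))

print(p_ball_volume(3, 2))             # the usual ball: 4*pi/3
print(p_ball_volume(3, 1))             # the octahedron (cross-polytope): 4/3
print(p_ball_volume(3, float("inf")))  # the cube of side 2: 8
print(dirichlet_volume([2, 2, 2]))     # agrees with p_ball_volume(3, 2)
```

Setting all the $p_i$ equal collapses Dirichlet's formula onto the p-ball one, which is a nice consistency check.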

Enough balls today? Not yet! I wish! I am showing you in a moment why calculus rocks. And not only the usual calculus, indeed. Fractional calculus is a variation of common calculus where you can get non-integer derivatives, even irrational, complex or more complicated definitions! Before that, let me remind you, as a caution, that $\Gamma(n+1)=n!$. And now define the Riemann–Liouville operator (fractional derivative):

(3)   $D^\alpha_x f(x)=\dfrac{1}{\Gamma(n-\alpha)}\dfrac{d^n}{dx^n}\displaystyle\int_0^x\dfrac{f(t)}{(x-t)^{\alpha-n+1}}\,dt,\qquad n-1\le\alpha<n$

Take now $f(x)=x^k$, with $k>-1$. Wow. Then,

(4)   $D^\alpha_x\, x^k=\dfrac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}\,x^{k-\alpha}$

and then, you obtain the partial result

Now, insert the appropriate values of $k$ and $\alpha$, and then

so, you finally deduce that

(5)

or equivalently

(6)

The fractional recurrence

holds for suitable parameters, and note that the general Riemann–Liouville fractional derivative

has gaps or poles, in principle, at certain values of $\alpha$, since the $\Gamma$ functions have singularities at the non-positive integers (the negative integers, including zero).
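The fractional power rule (4) is easy to play with numerically via the gamma function. A minimal sketch, checking that it reduces to the ordinary derivative for $\alpha=1$ and reproducing the classic half-derivative of $x$:

```python
import math

def rl_derivative_power(k, alpha, x):
    """Riemann-Liouville fractional derivative of f(x) = x^k (k > -1):
    D^alpha x^k = Gamma(k+1)/Gamma(k-alpha+1) * x^(k-alpha)."""
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * x ** (k - alpha)

# Sanity check: the ordinary first derivative of x^2 is 2x, so at x=3 we get 6.
print(rl_derivative_power(2, 1.0, 3.0))

# Half-derivative of x: D^(1/2) x = 2*sqrt(x/pi).
x = 4.0
print(rl_derivative_power(1, 0.5, x))
print(2 * math.sqrt(x / math.pi))  # same number, from the closed form
```

Applying the half-derivative twice to $x$ gives back the constant $1$, as a true "square root of the derivative" should.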

See you in another wonderful blog post!

## LOG#234. Quilibrium theory?

Quantum Mechanics, is it unbreakable? Is it effective or fundamental? Could it be an approximation to another theory? We do not know the ultimate word about that. However, up to the current date (circa 2019), it is fundamental. Every trial and experiment to go beyond quantum mechanics, and every clever experiment done to crack it, have failed. Quantum theory has remained essentially in the same framework during the last 70 years. It has accomplished too much, so, even if you find any theory going beyond quantum mechanics, you have to reproduce it in a certain limit: there is no way around its successful experimental tests. Perhaps we should accept it as it is. The greatest minds of all time found it hard anyway. Should we?

When compared to General Relativity, a theory that can be thought of as full of non-equilibrium states and equilibrium states, Quantum Mechanics (QM), according to the Born rule, should be thought of as an equilibrium probability distribution. This is the idea and thought of A. Valentini, a researcher who has spent much of his career as a scientist devoted to finding a generalization of QM. I am not sure if he is right, and I am afraid he is not, but anyway his ideas are worth a blog post.

Let me first point out an old mantra about classical preconceptions of atoms. Classical field theory predicts that accelerated charges radiate electromagnetic waves. The Larmor formula gives you the power or energy loss due to that emission:

(1)   $P=\dfrac{q^2a^2}{6\pi\varepsilon_0c^3}$

A lengthy calculation allows you to obtain the decay time of any electron (or, generally, any charged particle) spiraling into the nucleus of an atom (via an integral):

(2)   $t\simeq\dfrac{a_0^3}{4r_e^2c}$, with $a_0$ the Bohr radius and $r_e$ the classical electron radius

For the hydrogen atom, that time is about 10 picoseconds. So atoms are classically unstable. QM solves the classical instability thanks to a radically new set of rules that make charged particles capable of orbiting the nuclei in a way compatible with experimental expectations. However, QM is what we call a local (sometimes non-local!) theory based on the idea that particles are really “probability waves”. Observers collapse the wave states when measuring any quantum observable. This uncertainty, while unobserved, is the key point of QM, which otherwise is a deterministic theory that allows you to evolve quantum wave functions unitarily.

Antony Valentini suggests that QM is really a particular simplification of a much more general theory based on non-equilibrium distributions that do not satisfy $\rho=|\psi|^2$. This is a radical idea, since I do not know how to get probability conservation from it, but he states that a relaxation of the Universe provides the approximation where, we know, QM is true. But outside of it, there is non-equilibrium.

Let me first introduce the basic ideas of de Broglie’s pilot-wave theory (later adopted by Bohm). Wave functions are defined as

$\psi=|\psi|\,e^{iS/\hbar}$

with the guidance equation

$\dfrac{dx_i}{dt}=\dfrac{\nabla_iS}{m_i}$

for any particle $i$. Then, Valentini argues, this dynamics implies QM iff $\rho=|\psi|^2$. The equilibrium (quilibrium) theory is vindicated wherever $\rho=|\psi|^2$ holds: you get the continuity equation

$\dfrac{\partial\rho}{\partial t}+\displaystyle\sum_i\nabla_i\cdot\left(\rho\,\dfrac{\nabla_iS}{m_i}\right)=0$

as the equilibrium equivalent of the evolution of $|\psi|^2$ dictated by the Schrödinger equation.

The thing is that, well, there is no signaling beyond the speed of light in QM because of the equilibrium condition $\rho=|\psi|^2$. You cannot beat the Heisenberg Uncertainty Principle in QM and thus, generally speaking, you cannot get superluminal stuff (at least macroscopically and for large times, excepting multitime theories). Whenever you plug in $\rho=|\psi|^2$ you will obtain an equilibrium distribution in the framework of QM, for all times, due to the unitary evolution generated by the Hamiltonian. And the same ensemble, with the same $\psi$, can nevertheless have different configurations in the Bohm-de Broglie approach. However, a non-quantum, non-equilibrium theory would violate that. Non-local quantum theories in non-equilibrium (non-quilibrium theories) could be the reason behind the weird phenomenon known as entanglement. In non-local non-equilibrium theories, entanglement would be caused by a position-dependent departure from $\rho=|\psi|^2$. How to test this idea is complicated. Moreover, QM contains entanglement in a natural way, even if we do not understand the origin of entanglement yet… However, Valentini suggests seeking quantum noise in the Big Bang relic radiations and backgrounds. Non-equilibrium theory would hint at power deficits at long wavelengths in the power spectrum! Non-equilibrium theory could imply a pre-inflationary phase in the very, very early (Planckian?) Universe. It could imprint signals on the posterior inflationary phase and influence the way in which the CMB decoupled. However, there is no plain approach or concrete prediction of what those signals should be, to my knowledge.

Another concerning issue is related to the fact that space (spacetime) is expanding. Every part of the Universe can be seen as a single system inside a higher dimensional space. Non-equilibrium theory and its particles could be used to beat the HUP and send particles faster than the speed of light (in the absence of multiple times). Also, non-equilibrium particles would reestablish an absolute time, so I find it hard to reconcile that idea with a standard theory of relativity. Why are we stuck in equilibrium, though? Why is the Born rule true? Is there really an acceptable non-equilibrium theory?

Everything we know about the Universe comes from QM or its current incarnation, the Standard Model, and the cosmological standard model (LCDM), which also uses QM in a subtle way. If there is a subquantum/ultraquantum or non-equilibrium substrate of what we know as reality, it will be highly non-trivial, and very hard to test (even not even wrong). QM assumes that particles are also waves. Waves do experience different types of phenomena:

• Superposition.
• Linearity and non-linearity.
• Dimension and dimensionality (generally integer, but fractional waves do exist). Fractals and multifractal waves could be considered.
• Dispersion.
• Diffraction.
• Refraction.
• Reflection.
• Polarization (transversal waves only).
• Modulation.
• Resonance.
• Coherence.
• Interference.
• Diffusion.
• Attenuation, friction or damping.
• Forcing.
• Reverberation.

Some words on reverberation, dedicated to S. Hossenfelder. Reverberation is, for sound, the persistence of sound after the sound is produced. We could say our Universe, made of quantum fields/waves, is the reverberation of something that existed before space, before time, whatever they were. Perhaps we are wrong, and spacetime is forever. But the Hawking radiation, if true, shows that even spacetime decays… Sadly, we are doomed if spacetime is not eternal. About reverberation, again, the Wikipedia says that there is a typical measure of it, called reverberation time:

“(…)Reverberation time is a measure of the time required for the sound to “fade away” in an enclosed area after the source of the sound has stopped.”When it comes to accurately measuring reverberation time with a meter, the term T60 (an abbreviation for Reverberation Time 60dB) is used. T60 provides an objective reverberation time measurement. It is defined as the time it takes for the sound pressure level to reduce by 60 dB, measured after the generated test signal is abruptly ended(…)”.

There are about 5 types of reverberation, dubbed room, chamber, hall, cathedral, plate and shimmer. Also, there are some parameters in reverberation, called early reflections, reverb time, size, density, diffusion and pre-delay. For the reverb time, there is a semiempirical equation, due to Sabine (not S. Hossenfelder, sorry Sabine ;)):

(3)   $RT_{60}=\dfrac{24\ln 10}{c_{20}}\,\dfrac{V}{Sa}\approx 0.1611\,\mathrm{s\,m^{-1}}\;\dfrac{V}{Sa}$

There, Sabine (he, not she) established a relationship between the $RT_{60}$ of a room, its volume, and its total absorption (in sabins). This is given by the above equation, where $c_{20}$ is the speed of sound in the room (for 20 degrees Celsius), $V$ is the volume of the room in cubic meters, $S$ is the total surface area of the room in square meters, and $a$ is the average absorption coefficient of the room surfaces, while the product $Sa$ is the total absorption in sabins. What is a sabin? Well, let me define sabins:

Definition 1 (Sabin). A sabin is a unit of (sound) absorption. Sabins may be defined with either imperial or metric units. One square foot of 100% absorbing material has a value of one imperial sabin, and one square metre of 100% absorbing material has a value of one metric sabin.

The total absorption in metric sabins for a room containing many types of surface is given by:

$A=\displaystyle\sum_i S_ia_i=S_1a_1+S_2a_2+\cdots+S_na_n$

where $S_i$ are the areas of the surfaces in the room (in $\mathrm{m}^2$), and $a_i$ are the absorption coefficients of the surfaces. However, the total absorption in sabins (and hence the reverberation time) generally changes depending on frequency (which is defined by the acoustic properties of the space). The Sabine equation does not take into account room shape or losses from the sound traveling through the air (important in larger spaces). Most rooms absorb less sound energy in the lower frequency ranges, resulting in longer reverb times at lower frequencies. Sabine concluded that the reverberation time depends upon the reflectivity of sound from the various surfaces available inside the hall. If the reflection is coherent, the reverberation time of the hall will be longer; the sound will take more time to die out. The reverberation time $RT_{60}$ and the volume $V$ of the room have great influence on the critical distance (conditional equation):

$d_c\approx 0.057\sqrt{\dfrac{V}{RT_{60}}}$

where the critical distance $d_c$ is measured in meters, the volume $V$ is measured in $\mathrm{m}^3$, and the reverberation time $RT_{60}$ is measured in seconds.
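Sabine's formula is simple enough to code up directly. A minimal sketch for a hypothetical 6 m × 5 m × 3 m room (the absorption coefficients below are illustrative assumptions, not measured values):

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time RT60 ~ 0.161 * V / (sum of S_i * a_i),
    with V in cubic meters and surface areas in square meters."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)  # metric sabins
    return 0.161 * volume_m3 / total_absorption

# Hypothetical 6 m x 5 m x 3 m room: plaster walls/ceiling, carpeted floor.
V = 6 * 5 * 3
surfaces = [
    (2 * (6 * 3 + 5 * 3), 0.05),  # four walls, assumed absorption coefficient 0.05
    (6 * 5, 0.05),                # ceiling
    (6 * 5, 0.30),                # carpeted floor, assumed coefficient 0.30
]
print(f"RT60 = {rt60_sabine(V, surfaces):.2f} s")
```

With these numbers the sound takes about a second to fade by 60 dB; replacing the carpet with a reflective floor would push the reverb time up noticeably.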

On the other hand, waves and particles are really two sides of the same coin: the quantum field (a non-classical artifact!). Excitations of fields are particles. Fields propagate and extend their local (non-local?) influence through wave equations (Wavy motion! Wibbly wobbly timey wimey stuff!). No joke: people assume that time is a strict progression from cause to effect, but actually, from a nonlinear, non-subjective viewpoint, time is more like a big ball of wibbly wobbly, timey wimey… stuff… I wanted to write that… Lol. Long time ago… You move through time and space, and dimensions! From zero dimensional objects to higher dimensional objects and fields. Holography tells you the apparent dimension could be reversed, that is, that the real dimension is lesser than the apparent one; just the opposite to higher dimensional theories. Perhaps spacetime is doomed, and QM as well, as we discover some phenomena shedding light onto all the quantum crappy stuff we have today. However, quantum mechanics seems to be so true that people tried to quantize gravity… But problems remain. Our only successful approaches to quantum gravity are string theory and loop quantum gravity, both of them not without problems at the current time. If spacetime is really doomed, what are the right observables or degrees of freedom? That is the crucial question for quantum gravity, due to the link of black hole entropy with the yet unveiled microscopic fundamental degrees of freedom of spacetime. Whether they are strings or branes, our chaosmic fate is coded into them. Do we really need an extension of QM and space-time going beyond QM and normal quantum field theories? Or are we missing something? Perhaps we can only describe space-time as a fluid, as its fundamental degrees of freedom are so tiny that they will not be accessible in any foreseeable future. Perhaps we have to admit the polytropic equation is our best way, that is

$P=K\rho^{1+1/n}$

is the only simple way to handle spacetime at large scales, and the quantum vacuum is just a special type of superfluid or solid. Could we stop at the level of simple equations like the one by Lane–Emden? That is:

$\dfrac{1}{\xi^2}\dfrac{d}{d\xi}\left(\xi^2\dfrac{d\theta}{d\xi}\right)+\theta^n=0$
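The Lane–Emden equation is a nice target for a quick numerical experiment: for polytropic index $n=1$ it has the exact solution $\theta=\sin\xi/\xi$, whose first zero (the dimensionless stellar surface) sits at $\xi=\pi$. A minimal sketch with a fixed-step RK4 integrator (step size and starting point are my choices):

```python
import math

def lane_emden_first_zero(n=1, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta^n = 0 with theta(0)=1,
    theta'(0)=0, and return the first zero of theta. Fixed-step RK4;
    written for integer n (fractional n needs care once theta < 0)."""
    def f(xi, y):
        theta, u = y
        return (u, -theta ** n - 2 * u / xi)

    xi = 1e-6                        # start slightly off-center: 2/xi is singular at 0
    y = (1 - xi ** 2 / 6, -xi / 3)   # series expansion of theta near the center
    while y[0] > 0:
        k1 = f(xi, y)
        k2 = f(xi + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(xi + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(xi + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        xi += h
    return xi

# For n = 1 the exact solution is sin(xi)/xi, with first zero at pi.
print(lane_emden_first_zero(1))
```

The integrator lands on $\pi$ to within the step size, which is reassuring for the cases ($n=3$, say) where no closed form exists.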

There is a mysterious connection, to be explored at full power, between entanglement and geometry at the level of classical algebra and geometry… The metric is just the squared version of the vielbein:

$g_{\mu\nu}=e_\mu^{\ a}\,e_\nu^{\ b}\,\eta_{ab}$

This relation contains a tensor product. It is just the same tensor product we use in quantum phenomena. Yes! Entanglement everywhere! We could envision that there is a deep dictionary between entanglement classes and geometry. In fact, work by M. Duff and others hints at that too. Let me be more concrete. From the above metric definition you could think in terms of a ket

such as

and more generally

or

for antisymmetric gauge fields and for  symmetric tensor fields

Perhaps our current knowledge of the global Universe as a quantum object via inflation, and models like eternal inflation, is wrong simply because we are not doing the right calculations, which could perhaps be done with a better theory. That is a challenge for the 21st century.

See you in the next blog post!



## LOG#233. Electron microscopes.

Surprise! Second post today. It is a nice post, I believe.

Usually, we see the world using photons in certain wavelengths. Our eyes can see only a very limited width of the electromagnetic spectrum. The quantum revolution taught us that we can use other particles (and other wavelengths) to see the world and the Universe in ways we could have never ever imagined. This fact is even more general, and can be thought of as valid even for gravitational waves (bunches of gravitons!).

First, some electron microscopy basics (adapted from wikispaces)… There are several types of electron microscopes:

1st. Transmission electron microscopy (TEM). The original form of the electron microscope, the transmission electron microscope (TEM), uses a high voltage electron beam to illuminate the specimen and create an image. From the Wikipedia: the resolution of TEMs is limited primarily by spherical aberration, but a new generation of hardware correctors can reduce spherical aberration to increase the resolution in high-resolution transmission electron microscopy (HRTEM) to below 0.5 angstrom (50 picometres), enabling magnifications above 50 million times. The ability of HRTEM to determine the positions of atoms within materials is useful for nanotechnology research and development. A TEM consists of an emission source or cathode, which may be a tungsten filament or needle, or a lanthanum hexaboride (LaB6) source. Cryo-TEM is the cryogenic modification of TEM, in order to do EM for biology and precision TEM imaging. Samples cooled to cryogenic temperatures and embedded in an environment of vitreous water allow useful biological studies, and it deserved a Nobel Prize in 2017, to Jacques Dubochet, Joachim Frank, and Richard Henderson “for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution”.

2nd. Scanning electron microscopy (SEM). The scanning electron microscope (SEM) is a type of electron microscope that produces images of a sample by scanning the surface with a focused beam of electrons. The electrons interact with atoms in the sample, producing various signals that contain information about the surface topography and composition of the sample. The electron beam is scanned in a raster scan pattern, and the position of the beam is combined with the intensity of the detected signal to produce an image. In the most common SEM mode, secondary electrons emitted by atoms excited by the electron beam are detected using an Everhart-Thornley detector. The number of secondary electrons that can be detected, and thus the signal intensity, depends, among other things, on specimen topography. SEM can achieve resolution better than 1 nanometer. It can also be made cryogenic, as Wikipedia says: “(…)Scanning electron cryomicroscopy (CryoSEM) is a form of electron microscopy where a hydrated but cryogenically fixed sample is imaged on a scanning electron microscope‘s cold stage in a cryogenic chamber. The cooling is usually achieved with liquid nitrogen. CryoSEM of biological samples with a high moisture content can be done faster with fewer sample preparation steps than conventional SEM. In addition, the dehydration processes needed to prepare a biological sample for a conventional SEM chamber create numerous distortions in the tissue leading to structural artifacts during imaging(…)”.

3rd. Serial-section electron microscopy (ssEM). One application of TEM is serial-section electron microscopy (ssEM), for example in analyzing the connectivity in volumetric samples of brain tissue by imaging many thin sections in sequence.

4th. Reflection electron microscopy (REM). In the reflection electron microscope (REM) as in the TEM, an electron beam is incident on a surface but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons is detected. This technique is typically coupled with reflection high energy electron diffraction (RHEED) and reflection high-energy loss spectroscopy (RHELS). Another variation is spin-polarized low-energy electron microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains.

Non-relativistic electrons have a kinetic energy

(1)   $E_k=\dfrac{1}{2}m_ev^2=\dfrac{p^2}{2m_e}$

where $m_e\approx 9.11\times10^{-31}\,\mathrm{kg}$ is the electron mass. Any electron that is accelerated by a voltage $V$ changes its kinetic energy in a conservative way, so

(2)   $\Delta E_k=E_k^{(f)}-E_k^{(i)}=qV$

where the magnitude of the electron charge is $e\approx 1.602\times10^{-19}\,\mathrm{C}$.

Suppose that initially $v_i=0$, and thus $E_k^{(i)}=0$ and $p_i=0$. Then, the final kinetic energy reads

(3)   $E_k=eV=\dfrac{p^2}{2m_e}$

where $p=m_ev$ is the non-relativistic linear momentum. In the quantum realm, any particle like the electron has an associated wave and wavelength. It is the de Broglie wavelength. And it reads

(4)   $\lambda=\dfrac{h}{p}$

Using the energy equation above, you can derive that

(5)   $p=\sqrt{2m_eE_k}=\sqrt{2m_eeV}$

and thus, you can derive the equation of the (non-relativistic) electron microscopy

(6)   $\lambda_e=\dfrac{h}{\sqrt{2m_eeV}}$

Indeed, you can generalize this equation to a microscope of X-particles, where X-particles are particles with mass $m_X$ and electric charge $q_X$, as follows:

(7)   $\lambda_X=\dfrac{h}{\sqrt{2m_Xq_XV}}$

Good!!!! Now, some numerology. You can use the value of the Planck constant, roughly $h\approx 6.63\times10^{-34}\,\mathrm{J\cdot s}$, and then you can write

(8)   $\lambda_e\approx\dfrac{1.23\,\mathrm{nm}}{\sqrt{V(\mathrm{volts})}}$

(9)   $\lambda_e\approx\dfrac{1.23\,\mathrm{nm}}{\sqrt{E_k(\mathrm{eV})}}$

and where we wrote $V$ for the accelerating voltage in volts and $E_k$ for the kinetic energy. If, in particular, the energy is given in eV (electron-volts), then you get

Example 1. at .
Example 2. at .
Example 3. at .
Example 4. at .

Imagine muon microscopy, tau particle microscopy or boson microscopy…
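The non-relativistic microscope equation (6) is a one-liner in Python. A minimal sketch, including a hypothetical muon "microscope" just to see how the heavier probe shortens the wavelength:

```python
import math

H = 6.626e-34         # Planck constant, J*s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def debroglie_nonrel(voltage, mass=M_E, charge=E_CHARGE):
    """Non-relativistic de Broglie wavelength (m) after acceleration
    through `voltage` volts: lambda = h / sqrt(2 m q V)."""
    return H / math.sqrt(2 * mass * charge * voltage)

# 100 V electrons: wavelength ~ 0.12 nm, comparable to atomic spacings.
print(f"{debroglie_nonrel(100):.3e} m")
# Hypothetical muon probe at the same voltage: ~207x heavier, much shorter wavelength.
print(f"{debroglie_nonrel(100, mass=207 * M_E):.3e} m")
```

The $1/\sqrt{m}$ scaling is why a heavier X-particle would, in principle, resolve finer detail at the same accelerating voltage.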

Now, you can enter the special relativistic electron microscope realm. Just as you have $E=pc$ for photons (or any massless particle) and $\lambda=h/p$, for any relativistic particle with mass $m$ and energy $E$ (rest energy $E_0=mc^2$), the kinetic energy reads $E_k=E-mc^2=(\gamma-1)mc^2$, since $E=E_k+mc^2$. Again, for a conservative force set-up, $E_k=qV$. Taking into account that

$E^2=(pc)^2+(mc^2)^2$

the special relativity theory generalizes the de Broglie relationship (indeed, de Broglie himself used SR in his wave-particle duality!): $\lambda=h/p$, now with the relativistic momentum $p$.

From $E=E_k+mc^2$ you get

$E^2=E_k^2+2E_kmc^2+m^2c^4$

From $E^2=(pc)^2+(mc^2)^2$ you obtain algebraically

$(pc)^2=E_k^2+2E_kmc^2$

and

(10)   $p=\dfrac{1}{c}\sqrt{E_k^2+2E_kmc^2}=\dfrac{\sqrt{E_k\left(E_k+2mc^2\right)}}{c}$

Inserting this momentum into the relativistic de Broglie wavelength, you finally derive the full relativistic electron microscope equation (with $E_k=eV$)

(11)   $\lambda=\dfrac{hc}{\sqrt{E_k\left(E_k+2mc^2\right)}}=\dfrac{hc}{\sqrt{eV\left(eV+2m_ec^2\right)}}$

or, inserting practical units (with $V$ in volts), you also get equivalently

(12)   $\lambda_e\approx\dfrac{1.23\,\mathrm{nm}}{\sqrt{V\left(1+0.98\times10^{-6}\,V\right)}}$

Some numbers:

(13)

and where $\lambda_{NR}$ is the non-relativistic wavelength and $\lambda_R$ is the full relativistic wavelength. They are linked through the following expression:

$\lambda_R=\dfrac{\lambda_{NR}}{\sqrt{1+\dfrac{eV}{2m_ec^2}}}$

With simple scaling rules, you can extend the relativistic electron microscope to relativistic X-particle microscopes as follows:

(14)   $\lambda_X=\dfrac{hc}{\sqrt{q_XV\left(q_XV+2m_Xc^2\right)}}$
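The relativistic formula (11) can also be checked with a short sketch; for a standard 300 kV TEM the electron wavelength comes out close to 1.97 pm, noticeably shorter than the naive non-relativistic estimate:

```python
import math

H = 6.626e-34         # Planck constant, J*s
C = 2.998e8           # speed of light, m/s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def debroglie_relativistic(voltage, mass=M_E, charge=E_CHARGE):
    """Relativistic de Broglie wavelength (m) after acceleration through
    `voltage` volts: lambda = h*c / sqrt(K*(K + 2*m*c^2)), with K = q*V."""
    K = charge * voltage
    rest_energy = mass * C ** 2
    return H * C / math.sqrt(K * (K + 2 * rest_energy))

# 300 kV TEM electrons: lambda ~ 2 pm, far below atomic sizes.
print(f"{debroglie_relativistic(300e3):.3e} m")
```

At 300 kV the kinetic energy is a sizable fraction of the 511 keV rest energy, so the relativistic correction in the square root is essential.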

Definition 1 (Electron microscope). $\lambda_e=\dfrac{h}{\sqrt{2m_eeV}}$.

Definition 2 (X-particle microscope). $\lambda_X=\dfrac{h}{\sqrt{2m_Xq_XV}}$.

Definition 3 (Electron microscope(II)). $\lambda_e\approx\dfrac{1.23\,\mathrm{nm}}{\sqrt{V(\mathrm{volts})}}$.

Definition 4 (X-particle microscope(II)). $\lambda_X\approx\dfrac{1.23\,\mathrm{nm}}{\sqrt{V(\mathrm{volts})}}\sqrt{\dfrac{m_ee}{m_Xq_X}}$.

Definition 5 (Relativistic electron microscope). $\lambda_e=\dfrac{hc}{\sqrt{eV\left(eV+2m_ec^2\right)}}$.

Definition 6 (Relativistic X-particle microscope). $\lambda_X=\dfrac{hc}{\sqrt{q_XV\left(q_XV+2m_Xc^2\right)}}$.

## LOG#232. Is it relativistic?

Today or not today. That is the point. Today. How do you know if a given particle or system is (special) relativistic? That is a tricky question, since the reality is… everything is (special) relativistic. The question would be when you can use the usual newtonian approximation. When can you, and when can't you, use the newtonian approximation? That is the subject today. Firstly, newtonian physics or galilean relativity IS valid when you can safely say that:

• Linear momentum is linear in velocity AND mass, $p=mv$.
• Kinetic energy is quadratic in velocity, $E_k=\frac{1}{2}mv^2$.
• Velocity is much less than the speed of light, $v\ll c$. Tricky: how much less? Without loss of generality, anything well below the speed of light (say, below about ten percent of it) is approximately newtonian; you can notice special relativity in some precision examples, but galilean or newtonian physics is good enough for most cases.
• Time is absolute and universal, $t=t'$.
• Galilean transformations hold.

Secondly, special (Einstein’s) relativity holds whenever:

• Momentum is non-linear in velocity:

$p=\gamma mv=\dfrac{mv}{\sqrt{1-\dfrac{v^2}{c^2}}}$

• Kinetic energy is NOT quadratic in velocity; rather, the total energy minus the rest energy is

$E_k=E-mc^2=(\gamma-1)mc^2$

• Velocity is close (or equal, for massless particles/systems) to the speed of light: $v\lesssim c$ or $v=c$.

• Time is NOT universal, but relative to the observer, and time gets a dilation factor (time dilation):

$t'=\gamma t=\dfrac{t}{\sqrt{1-\dfrac{v^2}{c^2}}}$

• Lorentz transformations hold.

Equivalently, in galilean relativity

$E=\dfrac{1}{2}mv^2=\dfrac{p^2}{2m}$

for a free particle, and

$E=\dfrac{p^2}{2m}+V(x)$

for a particle/system under conservative potentials/forces.

However, in special relativity, you get

$E=\gamma mc^2$

and

$E=\gamma mc^2+V$

in general systems. Here, $\gamma=\left(1-\dfrac{v^2}{c^2}\right)^{-1/2}$. Then,

$E^2=(pc)^2+(mc^2)^2$

so

$E=\sqrt{(pc)^2+(mc^2)^2}$

The momentum is relativistic when $p\gtrsim mc$. If you define $\beta=v/c$, and $\gamma$ as above, then

• A particle is galilean/newtonian iff $\beta\ll1$, $\gamma\approx1$, $p\approx mv$, $E_k\approx\frac{1}{2}mv^2$.
• A particle is special relativistic (einsteinian) iff $\beta\sim1$ or $\beta=1$, $\gamma\gg1$, $p=\gamma mv$, $E_k=(\gamma-1)mc^2$.

Mass measurements are generally non-linear in velocity, but an invariant mass definition is possible via the scalar product in Minkowski spacetime. Particles moving at exactly the speed of light are massless. This is the case of gluons, photons, and gravitons ($m=0$), particles without rest mass; these particles verify $E=pc$ and $v=c$. Massive particles satisfy a more general dispersion relationship, as above:

$E^2=(pc)^2+(mc^2)^2$

and thus

$E=\sqrt{(pc)^2+(mc^2)^2}$

or

$p=\dfrac{1}{c}\sqrt{E^2-m^2c^4}$

The special case in which you have a massive particle with $pc\gg mc^2$ is called the ultrarelativistic case, and then you can approximate

$E=pc\sqrt{1+\dfrac{m^2c^2}{p^2}}\approx pc+\dfrac{m^2c^3}{2p}$

Here we note the purely relativistic massless case and the ultrarelativistic case easily, and we can also distinguish the massive or almost massless case in the purely relativistic case or the ultrarelativistic case:

• Ultrarelativistic: $pc\gg mc^2$, and $E\approx pc$.
• Relativistic regime: $v\sim c$, $\gamma\gg1$ or $p\sim mc$, $E\gtrsim mc^2$. Here, $E=\gamma mc^2$ or $E=\sqrt{(pc)^2+(mc^2)^2}$.
• Non-relativistic case: $m$ arbitrary, $v\ll c$, $\gamma\approx1$. Here, $E_k=\dfrac{p^2}{2m}$.

In the massive ultrarelativistic case:

$\dfrac{v}{c}\approx1-\dfrac{m^2c^4}{2E^2}$

In general, relativistic particles are

• Generally relativistic ($v\sim c$) with $\gamma\gg1$, when roughly $v\gtrsim0.9c$.
• Ultrarelativistic, almost massless or massive, with $E\approx pc$ and $\gamma\ggg1$. This is the case of neutrinos: $mc^2\ll E$.
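The criteria above can be wrapped into a tiny classifier. A minimal sketch (the regime thresholds $\beta<0.1$ and $\gamma>10$ are illustrative cutoffs of my own choosing, matching the rough boundaries discussed above):

```python
import math

def regime(kinetic_energy_ev, rest_mass_ev):
    """Classify a particle's kinematic regime from its kinetic energy
    and rest mass-energy (both in eV), via gamma = 1 + K/(m c^2)."""
    gamma = 1 + kinetic_energy_ev / rest_mass_ev
    beta = math.sqrt(1 - 1 / gamma ** 2)
    if beta < 0.1:          # illustrative threshold for "newtonian"
        label = "newtonian"
    elif gamma > 10:        # illustrative threshold for "ultrarelativistic"
        label = "ultrarelativistic"
    else:
        label = "relativistic"
    return beta, gamma, label

# 100 keV electron (rest energy 511 keV): beta ~ 0.55, clearly relativistic.
print(regime(100e3, 511e3))
# 1 eV electron: beta ~ 0.002, safely newtonian.
print(regime(1, 511e3))
```

The same function applied to a 10 GeV electron immediately returns the ultrarelativistic label, since $\gamma\approx2\times10^4$ there.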

See you in other blog post!

## LOG#231. Statistical tools.

Subject today: errors. And we will review formulae to handle them with experimental data.

Generally speaking, errors can be:

1st. Random. Due to imperfections of measurements or intrinsically random sources.

2nd. Systematic. Due to the procedures used to measure, or to uncalibrated apparatus.

There is also a distinction between accuracy and precision:

1st. Accuracy is closeness to the true value of a parameter or magnitude. It is, as you keep this definition, a measure of systematic bias or error. However, sometimes accuracy is defined (ISO definition) as the combination of systematic and random errors, i.e., accuracy would be the combination of the two observational errors above. High accuracy would require, in this case, high trueness and high precision.

2nd. Precision. It is a measure of random errors. They can be reduced with further measurements and they measure statistical variability. Precision also requires repeatability and reproducibility.

1. Statistical estimators.

Arithmetic mean:

(1)   $\bar{x}=\dfrac{1}{n}\displaystyle\sum_{i=1}^nx_i$

Absolute error:

(2)   $\Delta x_i=\left|x_i-\bar{x}\right|$

Relative error:

(3)   $\varepsilon_r=\dfrac{\Delta x}{\bar{x}}$

Average deviation or error:

(4)   $\delta=\dfrac{1}{n}\displaystyle\sum_{i=1}^n\left|x_i-\bar{x}\right|$

Variance or average quadratic error or mean squared error:

(5)   $s^2=\dfrac{1}{n-1}\displaystyle\sum_{i=1}^n\left(x_i-\bar{x}\right)^2$

This is the unbiased variance; when the total population is the sample, a shift must be done from $n-1$ to $n$ (undoing the Bessel correction). The unbiased formula is correct as far as it is a sample from a larger population.

Standard deviation (mean squared error, mean quadratic error):

(6)   $s=\sqrt{\dfrac{1}{n-1}\displaystyle\sum_{i=1}^n\left(x_i-\bar{x}\right)^2}$

This is the unbiased estimator of the mean quadratic error, or the standard deviation of the sample. The Bessel correction is assumed whenever our sample is smaller in size than the total population. For the total population, the standard deviation reads, after shifting $n-1\to n$:

(7)   $\sigma=\sqrt{\dfrac{1}{n}\displaystyle\sum_{i=1}^n\left(x_i-\bar{x}\right)^2}$

Mean error or standard error of the mean:

(8)   $s_{\bar{x}}=\dfrac{s}{\sqrt{n}}$

If, instead of the unbiased quadratic mean error, we use the total population error, the corrected standard error reads

(9) $\sigma_{\overline{x}}=\dfrac{\sigma}{\sqrt{n}}=\sqrt{\dfrac{1}{n^2}\sum_{i=1}^{n}\left(x_i-\overline{x}\right)^2}$

Variance of the mean quadratic error (variance of the variance):

(10) $\mathrm{Var}\left(s^2\right)=\dfrac{2\sigma^4}{n-1}$

Standard error of the mean quadratic error (error of the variance):

(11) $\sigma_{s^2}=\sigma^2\sqrt{\dfrac{2}{n-1}}$
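As a quick numerical sketch (with hypothetical data), the estimators above can be computed in a few lines of Python:

```python
import math

def estimators(data):
    """Compute the basic estimators defined above for a data sample."""
    n = len(data)
    mean = sum(data) / n                                  # arithmetic mean, Eq. (1)
    avg_dev = sum(abs(x - mean) for x in data) / n        # average deviation, Eq. (4)
    var = sum((x - mean) ** 2 for x in data) / (n - 1)    # unbiased variance, Eq. (5)
    std = math.sqrt(var)                                  # sample standard deviation, Eq. (6)
    sem = std / math.sqrt(n)                              # standard error of the mean, Eq. (8)
    return mean, avg_dev, var, std, sem

mean, avg_dev, var, std, sem = estimators([9.8, 10.1, 10.0, 9.9, 10.2])
print(mean)  # 10.0
```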

2. Gaussian/normal distribution intervals for a given confidence level (interval width of an entire number of sigmas)

Here we provide the probability for a random variable $X$ following a normal distribution to have a value inside an interval of width $2k\sigma$ centered at the mean $\mu$.

1 sigma amplitude ($k=1$).

(12) $P(\mu-\sigma\leq X\leq\mu+\sigma)\approx 0.6827$

2 sigma amplitude ($k=2$).

(13) $P(\mu-2\sigma\leq X\leq\mu+2\sigma)\approx 0.9545$

3 sigma amplitude ($k=3$).

(14) $P(\mu-3\sigma\leq X\leq\mu+3\sigma)\approx 0.9973$

4 sigma amplitude ($k=4$).

(15) $P(\mu-4\sigma\leq X\leq\mu+4\sigma)\approx 0.999937$

5 sigma amplitude ($k=5$).

(16) $P(\mu-5\sigma\leq X\leq\mu+5\sigma)\approx 0.9999994$

6 sigma amplitude ($k=6$).

(17) $P(\mu-6\sigma\leq X\leq\mu+6\sigma)\approx 0.999999998$
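These coverage probabilities all follow from the error function, $P(k)=\operatorname{erf}(k/\sqrt{2})$; a minimal Python check:

```python
import math

def coverage(k):
    """Probability that a normal variable lies within k sigmas of its mean."""
    return math.erf(k / math.sqrt(2))

for k in range(1, 7):
    print(f"{k} sigma: {coverage(k):.9f}")
```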

3. Error propagation.

Usually, errors propagate through indirect (derived) measurements.

3A. Sum and subtraction.

Let us define $x\pm\Delta x$ and $y\pm\Delta y$. Furthermore, define the variable $z=x\pm y$. The error in $z$ would be:

(18) $\Delta z=\Delta x+\Delta y$

Example. Weighing a liquid by difference: $m_{liq}=m_{tot}-m_{con}$, with $m_{tot}\pm\Delta m_{tot}$ the mass of the filled container and $m_{con}\pm\Delta m_{con}$ that of the empty container. Then, we have:

$m_{liq}=m_{tot}-m_{con}$ as liquid mass.

$\Delta m_{liq}=\Delta m_{tot}+\Delta m_{con}$, as total liquid mass error.

$m_{liq}\pm\Delta m_{liq}$ is the liquid mass and its error, together, quoted with 3 significant digits or figures.
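A minimal sketch of this weighing-by-difference example, with hypothetical numbers (the original figures are not reproduced here):

```python
# Hypothetical weighing-by-difference: the liquid mass is the difference
# of two direct measurements, so the absolute errors add, dz = dx + dy.
m_total, dm_total = 152.30, 0.05        # grams (hypothetical values)
m_container, dm_container = 48.70, 0.05

m_liquid = m_total - m_container        # z = x - y
dm_liquid = dm_total + dm_container     # worst-case error: dz = dx + dy

print(f"{m_liquid:.1f} +/- {dm_liquid:.1f} g")  # 103.6 +/- 0.1 g
```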

3B. Products and quotients (errors).

If

$z=x\cdot y$

then, with $x\pm\Delta x$ and $y\pm\Delta y$, you get

(19) $\dfrac{\Delta z}{\vert z\vert}=\dfrac{\Delta x}{\vert x\vert}+\dfrac{\Delta y}{\vert y\vert}$

If $z=x/y$, you obtain essentially the same result:

(20) $\dfrac{\Delta z}{\vert z\vert}=\dfrac{\Delta x}{\vert x\vert}+\dfrac{\Delta y}{\vert y\vert}$

3C. Error in powers.

With $z=x^n$ and $x\pm\Delta x$, then you derive

(21) $\dfrac{\Delta z}{\vert z\vert}=\vert n\vert\dfrac{\Delta x}{\vert x\vert}$

and if $z=f(x)$, with the error of $x$ being $\Delta x$, you get

(22) $\Delta z=\left\vert\dfrac{df}{dx}\right\vert\Delta x$

In the case of a function of several variables $f(x_1,\ldots,x_n)$, you apply a generalized Pythagorean theorem to get

(23) $\Delta f=\sqrt{\sum_{i=1}^{n}\left(\dfrac{\partial f}{\partial x_i}\right)^2\left(\Delta x_i\right)^2}$

or, equivalently, the errors are combined in quadrature (via standard deviations):

(24) $\sigma_f^2=\sum_{i=1}^{n}\left(\dfrac{\partial f}{\partial x_i}\right)^2\sigma_{x_i}^2$

since

(25) $df=\sum_{i=1}^{n}\dfrac{\partial f}{\partial x_i}\,dx_i$

for independent random errors (no correlations). Some simple examples are provided:

1st. $f=x\pm y$, with errors $\sigma_x$, $\sigma_y$, implies $\sigma_f=\sqrt{\sigma_x^2+\sigma_y^2}$.

2nd. $f=xy$ or $f=x/y$, with errors $\sigma_x$, $\sigma_y$, implies $\dfrac{\sigma_f}{\vert f\vert}=\sqrt{\left(\dfrac{\sigma_x}{x}\right)^2+\left(\dfrac{\sigma_y}{y}\right)^2}$.

3rd. $f=x^n$ would imply $\dfrac{\sigma_f}{\vert f\vert}=\vert n\vert\dfrac{\sigma_x}{\vert x\vert}$.
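A short Python sketch of quadrature propagation for these three cases, with hypothetical values $x=4.0\pm 0.2$ and $y=3.0\pm 0.1$:

```python
import math

def quad(*terms):
    """Combine independent error contributions in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

x, dx = 4.0, 0.2   # hypothetical measurement x +/- dx
y, dy = 3.0, 0.1   # hypothetical measurement y +/- dy

# 1st. f = x + y  ->  sigma_f = sqrt(dx^2 + dy^2)
df_sum = quad(dx, dy)

# 2nd. f = x * y  ->  sigma_f / |f| = sqrt((dx/x)^2 + (dy/y)^2)
f = x * y
df_prod = abs(f) * quad(dx / x, dy / y)

# 3rd. f = x^n    ->  sigma_f / |f| = |n| dx / |x|
n = 2
df_pow = abs(n) * (dx / abs(x)) * abs(x ** n)
```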

When $N$ different experiments with measurements $x_i\pm\sigma_i$ are provided, the best estimator for the combined mean is a weighted mean with the inverse variances as weights, i.e.,

(26) $\overline{x}=\dfrac{\sum_{i=1}^{N}x_i/\sigma_i^2}{\sum_{i=1}^{N}1/\sigma_i^2}$

This is also the maximum likelihood estimator of the mean, assuming the measurements are independent AND normally distributed. Then, the standard error of the weighted mean would be

(27) $\sigma_{\overline{x}}=\left(\sum_{i=1}^{N}\dfrac{1}{\sigma_i^2}\right)^{-1/2}$
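A sketch of the inverse-variance weighted mean, Eqs. (26) and (27), with hypothetical measurements:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean of measurements of the same quantity."""
    weights = [1.0 / s ** 2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    error = sum(weights) ** -0.5   # standard error of the weighted mean
    return mean, error

# Three hypothetical measurements of the same quantity:
mean, err = weighted_mean([10.2, 9.9, 10.0], [0.1, 0.2, 0.1])
```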

Least squares. A linear fit to a graph of $N$ points using the least squares procedure proceeds as follows. Let $(x_i,y_i)$, for $i=1,\ldots,N$, be sets of numbers from experimental data. Then, the linear function $y=a+bx$ that is the best fit to the data can be calculated with

$b=\dfrac{N\sum x_iy_i-\sum x_i\sum y_i}{N\sum x_i^2-\left(\sum x_i\right)^2}\qquad a=\dfrac{\sum y_i-b\sum x_i}{N}$
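The closed-form least squares fit can be sketched as follows (hypothetical, exactly linear data, so the fit is exact):

```python
def linfit(xs, ys):
    """Ordinary least squares for y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    a = (sy - b * sx) / n                          # intercept
    return a, b

a, b = linfit([0, 1, 2, 3], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 1.0 2.0
```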

Remark: for non-homogeneous samples, the best estimation of the average is not the arithmetic mean, but the median.

See you in other blog post!

## LOG#230. Spacetime as Matrix.

Surprise! Double post today! Happy? Let me introduce you to some abstract, uncommon representations of spacetime. You know we usually represent spacetime as “points” in a certain manifold, and we usually associate points to vectors, or directed segments, as $\mathbf{x}=(x^0,x^1,\ldots,x^{d-1})$, in $d$-dimensional spaces IN GENERAL (I am not discussing multitemporal stuff today, for simplicity).

Well, the fact is that when you go to 4d spacetime, and certain “dimensions”, you can represent spacetime as matrices or square tables with numbers. I will focus on three simple examples:

• Case 1. 4d spacetime. Let me define $\mathbb{R}^{1,3}\cong H_2(\mathbb{C})$ (the $2\times 2$ Hermitian matrices) as isomorphic spaces, then you can represent spacetime as follows

(1) $X=\begin{pmatrix}t+x_1 & \overline{z}\\ z & t-x_1\end{pmatrix},\qquad \det X=t^2-x_1^2-x_2^2-x_3^2$

and where $z$ is a complex number ($z=x_2+ix_3$).

• Case 2. 6d spacetime. Let me define $\mathbb{R}^{1,5}\cong H_2(\mathbb{H})$ as isomorphic spaces, then you can represent spacetime as follows

(2) $X=\begin{pmatrix}t+x & \overline{q}\\ q & t-x\end{pmatrix}$

and where $q$ is a quaternion number $q=q_0+q_1i+q_2j+q_3k$, with $i^2=j^2=k^2=ijk=-1$.

• Case 3. 10d spacetime. Let me define $\mathbb{R}^{1,9}\cong H_2(\mathbb{O})$ as isomorphic spaces, then you can represent spacetime as follows

(3) $X=\begin{pmatrix}t+x & \overline{o}\\ o & t-x\end{pmatrix}$

and where $o$ is

$o=o_0+\sum_{a=1}^{7}o_ae_a$

an octonion number with $e_a^2=-1$, $a=1,\ldots,7$.
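A small Python check of Case 1 (a sketch; the coordinate labels are my own choice): the determinant of the $2\times 2$ Hermitian matrix reproduces the Minkowski quadratic form.

```python
# Sketch of Case 1: encode a 4d event (t, x, y, z) as a 2x2 Hermitian matrix
# whose determinant equals the Minkowski interval t^2 - x^2 - y^2 - z^2.
def to_matrix(t, x, y, z):
    w = complex(x, y)                       # w = x + iy goes off-diagonal
    return [[t + z, w.conjugate()],
            [w,     t - z]]

def det2(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return d.real if isinstance(d, complex) else d

t, x, y, z = 2.0, 1.0, 0.5, 0.3
interval = t**2 - x**2 - y**2 - z**2
print(abs(det2(to_matrix(t, x, y, z)) - interval) < 1e-12)  # True
```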

Challenge final questions for you:

1. Is this construction available for different signatures?
2. Can you generalize this matrix set-up for ANY spacetime dimension? If you do that, you will understand the algebraic nature of spacetime!

Hint: Geometric algebras or Clifford algebras are useful for this problem and the above challenge questions.

Remark: These matrices are useful in

• Superstring theory.
• Algebra, spacetime algebra, Clifford algebra, geometric algebra.
• Supersymmetry.
• Supergravity.
• Twistor/supertwistor models of spacetime.
• Super Yang-Mills theories.
• Brane theories.
• Dualities.
• Understanding the Hurwitz theorem.
• Black hole physics.

## LOG#229. Mars organics.

Hi, there. Today's short post ignites a new post category: Life and Chemistry. The search for life outside Earth, and beyond, is a goal for the current and forthcoming centuries (provided we do not go extinct). First targets for life searches in the Solar System include: the solar system planets, some of their moons, and maybe comets.

Among all the solar system planets (neglecting the Moon by naive assumptions; perhaps we should reconsider that if there is enough ice and water on our satellite), Mars is an ideal place to search for life. Other targets like Titan, Ganymede, Ceres, Pluto and others will not be covered today. What do we know about Mars? Mars has a thin atmosphere made of carbon dioxide. And, after decades, we also know some chemical compounds on Mars (with more or less uncertainty):

• Sulfur-like compounds. A list includes
1. Thiophene $C_4H_4S$.
2. Methylthiophenes $C_5H_6S$.
3. Methanethiol $CH_3SH$.
4. Dimethyl sulfide $(CH_3)_2S$.
5. Benzothiophene $C_8H_6S$.
• Non sulfur-like compounds.
1. Benzene $C_6H_6$.
2. Toluene $C_7H_8$ (or tropylium ion $C_7H_7^+$).
3. Alkylbenzenes (or benzoate ion $C_7H_5O_2^-$).
4. Chlorobenzene $C_6H_5Cl$.
5. Naphthalene $C_{10}H_8$.

Smaller molecules seen on Mars (of course, beyond $CO_2$) are:

1. Carbonyl sulfide $OCS$.
2. Oxygen $O_2$.
3. Carbon disulfide $CS_2$.
4. Carbon monoxide $CO$.
5. Hydrogen sulfide $H_2S$.
6. Sulfur dioxide $SO_2$.
7. Methane $CH_4$. The origin of martian methane is yet a mystery. Proof of life, interior geology, or exotic or complex mechanisms? We do not know.

There are lots of carbon-chain molecules, with about 1 up to 5 carbon atoms, likely in the Mars soil. Will we find azobenzene molecules on Mars? Likely not. Of course, we will not find superconductors, in principle, but now we do know there is water below the Mars soils. Ice. And likely salty subterranean water cycles. Are there bacteria and other life beings, even microscopic or bigger, on Mars right now? Impossible to say yet! New rovers will try to uncover the biggest mysteries of the red planet and finally decide if there is some kind of microorganism or even life beings hidden on Mars! We will know in this century for sure! If not on Mars, Titan and other places of the Solar System, like Enceladus, have prospectively high odds to sustain some kind of life. Europa as well. But do not try to land there ;).

See you in another blog post!

## LOG#228. The scientific method.

What is the scientific method? There are many definitions out there, but I am providing mine, the one I explain to my students, in this short post.

SCIENTIFIC METHOD (Definition, not unique)

A (cyclic) method/procedure to gather/organize, check (verify or refute) and test, conserve/preserve and transmit/communicate knowledge (both in form of data or organized abstract data/axioms/propositions) or more generally information, based on:

• Experience. By experience we understand observation of natural phenomena, original thoughts, common sense perceptions and observed data from instruments. You can also gather data with emulation or simulation of known data, in a virtual environment.
• Intuition and imagination. Sometimes scientific ideas come from experience, sometimes from intuitions and abstractions from real world and/or structures. You can also use imagination to test something via gedanken or thought experiments tied to the previous experiences or new experiences, or use computer/AI/machines to creatively check or do inferences.
• Logic and mathematical language. Logic, both inductive and deductive, is necessary for mathematical or scientific proofs. Since Galileo, we have known that Mathematics is the language in which Nature is best described. We can also say that this includes reasoning or reason as a consequence.
• Curiosity. The will to know is basic for scientists. No curiosity, no new experiments, observations, theories or ideas.

The scientific method has some powerful tools:

• Computers and numerical simulations. This is new from the 20th century. Now, we can be aided by computer calculations and simulations to check scientific hypotheses or theories. Machine learning is also included here as a subtool.
• Statistics and data analysis. Today, in the era of Big Data and the Rise of AI, this branch and tool from the scientific method gains new importance.
• Experimental devices to measure quantities predicted or expected from observations and or hypotheses, theories or models.
• Rigor. Very important for scientists, and mathematicians even more, is the rigor of the method and analysis.
• Scientific communication, both specialized and plain for everyone. Scientists must communicate their results and findings for testing. Furthermore, they must try to make accessible the uses of their findings or why they are going to be useful or not in the future.

Scientific method can begin from data, or from theories and models. Key ideas are:

• (Scientific) Hypothesis. Idea, proposition, argument or observation that can be tested in any experiment. By experiment, here, we understand also computer simulations, numerical analysis, observation with telescope or data analysis instruments, machine/robotic testing, automatic check and/or formal proof by mathematical induction or deduction.
• An axiom is a statement that is assumed to be true without any proof, based on logical arguments or experience.
• A theory is a set of tested hypotheses, subject to further testing before being considered true or false. A theory is also a set of statements developed through a process of continued abstraction and experiment. A theory aims at a generalized statement, or at explaining a phenomenon.
• A model is a purposeful representation of reality.
• A conjecture is a proposition based on inconclusive grounds, and sometimes cannot be fully tested.
• A paradigm (Kuhn) is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.

What properties allow us to say something is scientific and something is not? Philosophy of science is old and some people thought about this question. Some partial answers are known:

• Falsifiability. Any scientific idea, hypothesis or proposition can be refuted and tested. Otherwise, it is not science but belief. Scientific claims can be refuted and argued against. Experiments or proofs can be done to check them. Kuhn defended the addition of ad hoc hypotheses to sustain a paradigm; Popper rejected this approach.
• Verification of data or hypotheses/theories/arguments. Even when you can refute and prove a theory wrong, verification of current theories or hypotheses is an important part of the scientific toolkit.
• Algorithmic truths and/or logical procedures. Science proceeds with algorithms and/or logic to test things. Unordered checking loses credibility. Trial and error is another basic procedure of Science.
• Heuristics arguments based on logic and/or observations. Intuition and imagination can provide access to scientific truths before testing.
• Reproducibility. Any experiment or observation, in order to be scientific, should be reproducible.
• Testable predictions. Usually, theories or hypotheses provide new predictions, not observed before.

The scientific method is an iterative, cyclical process through which information is continually revised. Thus, it can be thought as a set of 4 ingredients as well:

• Characterizations (observations, definitions, and measurements of the subject of inquiry).
• Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject).
• Predictions (inductive and deductive reasoning from the hypothesis or theory).
• Experiments (tests of all of the above).

Peirce distinguished between three types of procedures:

• Abduction. It is a mere “guess”, intuitive and not too formal.
• Deduction. It includes premises, explanations and demonstrations.
• Induction. A set of classification, probations and sentient reasoning.

From a pure mathematical and theorist way, there are only knowing and understanding facts, analysis, synthesis and reviews or extensions of information/knowledge. From the physical or experimentalist viewpoint, however, we have more:

• Characterization of experiences and observations.
• Proposals of hypotheses.
• Deductions and predictions from hypotheses.
• Realization of tests and experiments (gathering data).

Note that, from a simple viewpoint, the scientific method and/or main task of Science is to study:

• Regularities, patterns and relationships between objects and magnitudes.
• Anomalies or oddities, generally hinting something new beyond standard theories.
• Reality as something we measure and the link between observers and that reality. What is reality after all? Hard question from the quantum realm side…

On the other hand, a purely Bayesian approach to Science is also possible. In a Bayesian setting, Science is only a set-up to test the degree of belief in any proposition/idea/set of hypotheses/model/theory. Theories provide measurable observables and quantities, and scientific predictions are only valid up to a certain confidence level with respect to some probability distributions. This probabilistic approach to Science does not exclude the existence of purely true or false hypotheses, or a frequentist approach to data and error analysis (it complements that tool); it only provides a framework to estimate the probability of propositions, data vectors and experimental parameters fitting certain probability distributions “a priori”.

How to elucidate the degree of (scientific) belief in something? W. K. Clifford, and later E. T. Jaynes, discussed this topic. In The Ethics of Belief it was argued that rules or standards that properly govern responsible belief-formation and the pursuit of intellectual excellence are what philosophers call epistemic (or “doxastic”) norms. Widely accepted epistemic norms include:

• Don’t believe on insufficient evidence.
• Proportion your beliefs to the strength of the evidence.
• Don’t ignore or dismiss relevant evidence.
• Be willing to revise your beliefs in light of new evidence.
• Avoid wishful thinking.
• Be open-minded and fair-minded.
• Be wary of beliefs that align with your self-interest.
• Admit how little you know.
• Be alert to egocentrism, prejudice, and other mental biases.
• Be careful to draw logical conclusions.
• Base your beliefs on credible, well-substantiated evidence.
• Be consistent.
• Be curious and passionate in the pursuit of knowledge.
• Think clearly and precisely.
• Carefully investigate claims that concern you.
• Actively seek out views that differ from your own.
• Be grateful for constructive criticisms.
• Persevere through boring or difficult intellectual tasks.
• Be thorough in your intellectual work.
• Stick up for your beliefs, even in the face of peer pressure, ridicule, or intolerance.

Unanswered questions by Science are yet to be provided:

1. Why is mathematics so accurate and precise in describing Nature?
2. Why is the Universe comprehensible and non-chaotic, but regular and structured in general? It could have been very different!
3. Why are numbers and structures so efficient?
4. Is Science affected by the Gödel theorems, or does it go beyond their applicability?
5. Can Science explain everything?
6. Are chaotic and other mathematical universes possible and physically realizable, or are they ideally unfeasible?

Usually, the scientific method contained theory and experiment only. Now, it also includes: computation, big data, machine learning and AI tools!

See you in another blog post!

## LOG#227. Cosmic energy.

Short post number two! Surprise!

Have you ever wondered what the cosmic energy of the Universe is? Well, giving up certain General Relativity issues related to the notion of energy in a local sense, there is indeed a global notion of energy for the Universe as a whole. I am not considering the Multiverse as an option today. Let me begin with the High School notion of mass and density, particularized for the Universe:

$M_U=\rho V$

We are considering a closed spherical Universe with 3d geometry (a 3-sphere), and then its volume reads

$V=2\pi^2R^3$

What is the radius of the Universe? Well, we could take it as the Hubble radius of the observable Universe, i.e.,

$R_H=\dfrac{c}{H}$

where $H$ is the Hubble parameter. The density of the Universe can be written as the cosmological value of the vacuum/Hubble scale

$\rho=\dfrac{\Lambda c^2}{8\pi G}=\dfrac{3H^2}{8\pi G}$

so $\Lambda=3H^2/c^2$. Therefore, the formula for the mass of the Universe in terms of fundamental constants is

$M_U=\rho V=\dfrac{3H^2}{8\pi G}\,2\pi^2\left(\dfrac{c}{H}\right)^3=\dfrac{3\pi}{4}\dfrac{c^3}{GH}$

and the expression for the cosmic energy follows up from special relativity's greatest formula as

$E_U=M_Uc^2=\dfrac{3\pi}{4}\dfrac{c^5}{GH}$

Also, defining the Planck length $L_P=\sqrt{\hbar G/c^3}$ and the cosmological length $L_\Lambda=\sqrt{3/\Lambda}=c/H$, you get

$E_U=\dfrac{3\pi}{4}\dfrac{\hbar c\,L_\Lambda}{L_P^2}$

Remark: the cosmological constant fixes not only the biggest mass as the Universal mass of the Universe (I am sorry for the pedantic expression), but also fixes the smallest possible mass (the so-called Garidi mass in the de Sitter group or de Sitter relativity):

$m_{dS}=\dfrac{\hbar}{c}\sqrt{\dfrac{\Lambda}{3}}=\dfrac{\hbar H}{c^2}$

And you can thus prove that

$M_U\,m_{dS}=\dfrac{3\pi}{4}\dfrac{\hbar c}{G}=\dfrac{3\pi}{4}m_P^2$

Note that

$m_P=\sqrt{\dfrac{\hbar c}{G}}$

and thus

$\dfrac{M_U}{m_{dS}}=\dfrac{3\pi}{4}\left(\dfrac{L_\Lambda}{L_P}\right)^2\sim 10^{122}$
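As a numeric sanity check of the closed-universe estimate $M_U=\frac{3\pi}{4}\frac{c^3}{GH}$, here is a short Python sketch taking $H\approx 70\ \mathrm{km/s/Mpc}$ (an assumed round value; only the order of magnitude matters):

```python
import math

# Order-of-magnitude estimate of the cosmic mass and energy.
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # Newton constant, m^3 kg^-1 s^-2
H = 70e3 / 3.086e22    # Hubble parameter (~70 km/s/Mpc) in s^-1

M_U = (3 * math.pi / 4) * c**3 / (G * H)   # mass of the Universe, kg
E_U = M_U * c**2                           # cosmic energy, J

print(f"M_U ~ 10^{math.log10(M_U):.0f} kg, E_U ~ 10^{math.log10(E_U):.0f} J")
```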

See you in other blog post!!!!!