Cosmology and the Creation of the Universe

last update: 30 September 2020


Understanding the creation of stars and planets starts with a few 'simple' questions. What exactly is a star? Where did the Universe come from? Was there really a Big Bang? Is space really empty? What is dark matter?

Wikipedia tells us that the
Big Bang theory is the prevailing cosmological description of the development of the Universe. According to this theory, space and time emerged together 13.799±0.021 billion years ago and the energy and matter initially present have become less dense as the Universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10⁻³² seconds, and the separation of the four known fundamental forces, the Universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form.

It is difficult to imagine that less than 100 years ago, we did not know about the existence of most of the
Universe around us. From today's perspective, the reality of a very large, old, expanding Universe, filled with billions of galaxies that are receding from each other as the cosmic space expands from an initial Big Bang billions of years ago seems so obvious that we expect it must have been known for centuries. Not so, as we will learn on this webpage.

What I am going to try to describe is how the
Universe has evolved. For the first 47,000 years its behaviour was dominated by radiation, notably the relativistic constituents such as photons and neutrinos. From ca. 370,000 years after the Big Bang the Universe has been dominated by matter. During this matter-dominated era the expansion of the Universe began to slow down, as gravity reined in the initial outward expansion. Scientists now think that from about 9.8 billion years after the Big Bang the Universe slowly stopped decelerating, and its expansion gradually began to accelerate again.

In different texts there is also mention of a 'hot' Big Bang, which is more or less defined as occurring after the inflationary epoch (leaving Big Bang as a simple generic term including the inflationary epoch and even earlier periods).

Note that I wrote (copied) "
space and time emerged together 13.799 billion years ago", and not "the Universe is 13.799 billion years old" or "the Universe began 13.799 billion years ago". What we know is that the Big Bang started 13.799 billion years ago, but we don't really know how long the Universe had to wait for the Big Bang to get going, or if anything had to 'wait' at all. We need to be very careful in the use of words, because we can't say "the Universe is 13.799 billion years old", but we can certainly say "the Earth is about 4.5 billion years old".

Another point worth noting is that we wrote
"space and time emerged…" and not "matter rushed into space …", nor did we suggest some kind of initial explosion simply because we used the term Big Bang. Explosions happen within pre-existing space. But here the suggestion is that some 'objects' were, at one moment in time, inside a very small space and the next moment they were inside a bigger space. The objects did not change or 'move', nor were they affected in any way (even if density changed). No heat or pressure was applied to them, space just appeared so that there were bigger distances between the objects. In the world of classical mechanics, space was just where stuff happened, whereas today spacetime is a thing that can grow, shrink, deform, wiggle, and change shape. In fact the objects did not move, so the space between them could appear and grow much, much faster than the speed of light. I've read that this also implies that any initial singularity would have had to contain all the energy and spacetime of the future Universe.

A final point worth noting is that as we go back in time the physical nature of
matter becomes increasingly speculative, despite the fact that we appear to be able to retain the concepts of time and space over sixty orders of magnitude.

Cosmology and the Big Bang

Physical cosmology is concerned with the origin, structure, and chronology of the Universe, starting with the Big Bang.

The first problem is that the
Big Bang theory does not actually say that the Universe started with a bang. In fact it says nothing about the actual moment of birth of the Universe, although some experts say that we can go back to 10⁻⁴³ seconds just after the birth. In any case our story must start when the Universe was much, much smaller, and much, much hotter. The postulate is that in the very distant past the Universe was so hot that matter, atoms, nuclei did not exist. We know this because we have something called the cosmic microwave background. Also extensive galaxy surveys have quantified the distribution of galaxies with a level of detail unthinkable only a few years ago. In addition we now know immeasurably more about both intergalactic space and dark matter.

A recent publication (June 2020) noted that in fact we don't know exactly when the first stars formed in the Universe because they haven't been observed yet. And it looks as if the first stars and galaxies may have been formed even earlier than previously estimated. Using the Hubble Space Telescope they have looked back to when the Universe was only 500 million years old, and they found no evidence of those very first stars. The suggestion is that galaxies must have formed much earlier than originally thought.
Population I stars are metal-rich young stars commonly found in the spiral arms of our Milky Way (and it includes our Sun). Population II stars are older metal-poor stars commonly found in the bulge near the centre of our Milky Way. Population III stars are a hypothetical population of massive, luminous and hot stars with virtually no metal content, thought to have existed in the very early Universe (so stars only containing hydrogen, helium and lithium).

Hubble Frontier Fields

The Hubble Deep Field is a small region of the constellation Ursa Major (often known as the Great Bear). The viewing angle is very narrow so almost all the 3,000 objects are galaxies, some of which are among the youngest and most distant known. The Hubble Ultra Deep Field goes one step better and looks back approximately 13 billion years.

Cosmic Timeline

So what do we know? The European Space Agency put together the above summary of the almost 14 billion year history of the Universe, showing in particular the events that contributed to the cosmic microwave background (CMB).
The timeline in the upper part of the illustration shows an artistic large-scale view of the evolution of the cosmos. The processes depicted range from inflation, the brief era of accelerated expansion that the Universe underwent when it was a tiny fraction of a second old, to the release of the CMB, the oldest light in our Universe, imprinted on the sky when the cosmos was just 370,000 years old. And then on through the 'Dark Ages' to the birth of the first stars and galaxies, which reionized the Universe when it was a few hundred million years old, and then all the way to the present time.
Tiny quantum fluctuations generated during the inflationary epoch were the seeds of future structures, i.e. today's stars and galaxies. After the end of inflation, dark matter particles started to clump around these cosmic seeds, slowly building a cosmic web of structures. Later, after the release of the CMB, normal matter started to fall into these structures, eventually giving rise to stars and galaxies.

The Cosmological Principle

Our starting point is '
The Cosmological Principle' which states that, looked at as a whole, the spatial distribution of matter in the Universe is homogeneous and isotropic. They are large-scale properties, and they don't mean the same thing. However, together they mean that whoever or whatever you are, and wherever you might be, and in whatever direction you look, the Universe appears to be more or less the same.
Homogeneous just means that there is no preferred position to observe the Universe. The implication is that the average density of matter is about the same everywhere, and viewed from a distance the Universe is fairly smooth.
Isotropic means that no matter in which direction you look there are no differences in the structure of the Universe. If the Universe were isotropic for any observer anywhere, then that would also mean that the Universe was homogeneous.
In today's practical world there is a tendency to consider
gases, liquids and solids as three unrelated types of matter. Gases are covered by the gas laws, liquids are viscous and have surface tension, and solids follow the laws of elasticity, but in this text we will focus more on the fundamental interactions that are common to all forms of matter.
It is said that
Edward A. Milne (British, 1896-1950), developer of the Milne model for the Universe, was first to coin the expression 'The Cosmological Principle'.

Obviously there are some physical variations today, e.g.
stars, planets, voids, and even entire galaxies, but the idea is that looking at the Universe as a whole it's still very homogeneous and isotropic. So up-close the Universe might not be that homogeneous, but at some point, as you zoom-out, homogeneity takes over. Some writers have suggested that clumps of stars might look like discrete structures, but they are no more than "crumbs which make up the interior of a cake".

'The Cosmological Principle' also implies that all the stuff in the Universe must be governed by the same set of physical laws. And this is true no matter who you are, where you are, or where you are looking, i.e. those physical laws have no preferred location and no special direction (and we might also assume that any physical constants we find will also be universal).
There is an argument that the true origin of the Universe may never be known (or observable), but that the beginning of the operation of the laws of nature can be understood. On the other hand, in 1917 Einstein's view of the Universe was one that was uniform and spatially closed, corresponding to a positive curvature of space. So it was finite yet with no boundary, and it contained a finite number of stars. Also it was static in the sense that the curvature of space and the mean density of matter remained constant. Later models would extend and enrich Einstein's field equations.

Another implication is that we are not at the
centre of the Universe, but then again no one else is either.
The Copernican Principle states that we are not privileged observers of the Universe. This principle replaced the old Ptolemaic system, which placed Earth at the centre of the Universe. An isotropic Universe means that there is no centre to the Universe, and the Big Bang did not occur in any particular place.

It's also important to understand that '
The Cosmological Principle' refers to space, not time. In fact the Universe is neither homogeneous nor isotropic over all of time. However, in so far as the principle applies to physical laws, we assume that those same laws do hold for what we observe from the past. When we look at the light from distant stars we know we are looking at what happened in the past, but we still want to assume that the same physical laws applied at that time. Making the assumption that the Universe is homogeneous and isotropic allows us to calculate its history back to just 10⁻⁴³ seconds after its birth.

Just looking up at the
night-sky it's normal that people have a few simple questions about what they see. One question concerns our very dark night-sky and all those billions of stars in the Universe. Why are we not constantly surrounded by the bright lights of all those stars? In fact why don't we have an infinitely bright sky? This is called Olbers' Paradox, although both Johannes Kepler (German, 1571-1630) and Edmond Halley (English, 1656-1741) had asked the same question in the past.
Obviously we are not surrounded by stars, so there must be an explanation. There could be too much
dust in space. Perhaps there are not enough stars to cover our night-sky. Maybe the stars are not uniformly distributed (isotropic) over our night-sky. Maybe the Universe is expanding, and the really distant stars are being redshifted into obscurity. Maybe the Universe is so young, that all the light from all those billions of stars hasn't yet reached us.
If there is
dust it also would heat up after a while and then radiate like a star. And if you really wanted the dust to act as a kind of shield, you would expect to detect it between the Earth and our own Sun. Hundreds of billions of stars are still a finite number, so perhaps there aren't enough stars to light up our night-sky. And we can't be absolutely sure that some stars aren't hiding behind others, and leaving some empty (dark) spaces.
Combining the last two suggestions it is possible that they are collectively responsible for all the 'missing'
light. Light has a finite speed and we can only see light from stars that is less than about 13.8 billion years old. Edwin Hubble (American, 1889-1953) discovered the systematic recession of the galaxies, and so it was initially thought that light from distant galaxies had been redshifted beyond visible wavelengths (so we would not see all those stars). However today, experts think that the most important effect is that, given a cosmic event horizon of about 13.8 billion light-years, photons from very distant galaxies simply have not had the time to reach the Earth. And with only about 10¹¹ galaxies within the Earth's cosmic horizon, there are not enough stars to light up its night-sky.

Based on the average size of a
star (say, 5 light-seconds across) and the average separation between stars (about 1000 light-years), to fill our night-sky we would have to include all the light emitted up to 10²⁴ years ago. And frankly we don't know how long stars have been around for. And we do know that after some time stars just fade and stop emitting light.
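As a rough sketch of where such an enormous depth comes from, one can estimate the mean free path of a line of sight before it intercepts a star, using the illustrative numbers quoted above (star diameter ~5 light-seconds, separation ~1000 light-years). This is purely an order-of-magnitude exercise:

```python
import math

# Back-of-envelope Olbers' paradox estimate: how far must a line of
# sight travel, on average, before it hits a star?
# Mean free path = 1 / (number density x geometric cross-section).
YEAR_S = 3.156e7                      # seconds per year
radius_ly = (5 / 2) / YEAR_S          # stellar radius in light-years
sigma = math.pi * radius_ly ** 2      # cross-section in square light-years
n = 1 / 1000 ** 3                     # stars per cubic light-year

mean_free_path_ly = 1 / (n * sigma)
print(f"mean free path ~ 10^{math.log10(mean_free_path_ly):.0f} light-years")
```

The answer lands around 10²²-10²³ light-years, within an order of magnitude or two of the depth quoted above, and either way vastly larger than the observable Universe.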

With the results of Hubble we understood that the more distant a galaxy was, the faster it was receding. So in the future the great distances between objects would continue to increase. It was in 1945 that George Gamow (Russian, 1904-1968) extrapolated back in time, showing that the Universe would have been denser and perhaps more uniform. Perhaps radiation was more energetic in the past, and hence the Universe was hotter. So perhaps there was a time when large galaxies did not exist, perhaps before that there were no stars, and perhaps before that there was just a cloud of plasma, and perhaps before that, …

Another question is about the endlessness of the
Universe, or if there are boundaries somewhere. The usual answer involves the idea that the Universe contains everything, therefore you can't have something outside 'everything' (i.e. nothing can influence the Universe from 'outside'). So the Universe is boundless, which does not automatically mean it's infinite (e.g. the surface of a sphere is boundless but certainly not infinite). Space is something physical and must be contained inside the Universe. Also given that the Universe contains everything, it must also contain the means for creating itself.
Aristotle (Greek, 384-322 BC) took a different line in concluding that the material Universe was spatially finite and filled with a finite number of stars. He argued that an infinite number of stars would never be able to spin around the Earth in 24 hours. So you had a finite number of stars, and you would need only a finite space for them. Giordano Bruno (Italian, 1548-1600) did ask what was outside the finite Universe, but in lieu of an answer he was burned at the stake. Today he is remembered for proposing that stars were distant suns surrounded by their own planets.
On the other hand, in 1917
Albert Einstein (German, 1879-1955) proposed a mathematical model of the Universe in which the total volume of space was finite yet had no boundary or edge. In addition Hubble showed that the distribution of galaxies was close to uniform when averaged over a large volume of space, and that there was no observable boundary or edge.

Yet another question concerns the idea that if the cosmos had a beginning, then something must have caused it. Inherent in classical physics is the idea that cause precedes the effect, so how can there be a causal agent before the beginning when time did not exist? However in quantum mechanics processes can occur without a cause, and radioactive decay is the usual example. When an atom transmutes into another atom, we can say that the new atom was created, i.e. it did not exist before the decay. And the decay process is governed by the probabilistic laws of quantum mechanics, which means that no cause can be given as to why the new atom was created at that specific moment. Yet the old atom did exist, so there is an implicit temporal link, as there must be between the before and after of the Big Bang.

Can we test these basic ideas? There are two pieces of evidence, the first is the cosmic microwave background (CMB) and the second is redshift. In many ways these two pieces are more than just evidence, they are measurements that must be explained by any model of the Universe.

In addition there were other results that were predicted by the
Big Bang theory. The first was a cosmic infrared background (CIB), which is different from the cosmic microwave background. The second was the cosmic neutrino background (CNB).

But before we look at these intriguing topics, we will need to 'calibrate' ourselves, and look at how astronomers think about time, space, and the 'magnitudes' that they observe in the Universe.

Time and Distance

In the introduction we mentioned that the concepts of
time and space appear to be valid over sixty orders of magnitude. How can we understand such 'astronomical' distances and such 'mind-warping' periods of time?

Cosmic Distance Ladder

Distance measurement is not just about determining the scale of the Universe, it also helps in understanding the physical nature of astronomical objects. The cosmic distance ladder describes the way astronomers determine distances. We are told that there is no single technique that can measure distances over the entire Universe. The approach is to use a variety of distance measurement techniques, each working over a limited range of distances. Distances can be measured directly (primary), or calibrated against a primary measurement (secondary), or eventually calibrated against a secondary technique (tertiary). As you might guess primary techniques can only be used for objects that are nearby. Secondary and tertiary techniques are used for sources within and beyond the Virgo cluster respectively (the Virgo cluster is more than 50 million light-years away from Earth). So we have a hierarchic procedure where the closest distance indicators are used to calibrate farther distances.

The first 'rung' on our
astronomical ladder is the Astronomical Unit which can be measured in a number of different ways. The astronomical unit (AU) is a unit of length that equates to the distance between the Earth and the Sun. The AU was originally envisaged as an average, but since 2012 it has been attributed an exact distance of 149 597 870 700 metres. Given that the speed of light is now also expressed as exactly 299 792 458 m/s, we know that the time for light to traverse an AU is slightly more than 8 minutes 19 seconds (i.e. we think of this as the time it takes for sunlight to arrive on the Earth from the Sun). The AU is typically used for distances within the solar system, but is too small to be useful for interstellar distances.
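Since both constants are now exact by definition, the 8 minutes 19 seconds figure can be checked in a couple of lines of Python (a simple sketch, just dividing the two defined values):

```python
# Light travel time across one astronomical unit. Both constants are
# exact by definition (the AU since 2012, the speed of light since 1983).
AU_M = 149_597_870_700        # astronomical unit, in metres
C_M_S = 299_792_458           # speed of light, in metres per second

seconds = AU_M / C_M_S                 # ~499 seconds
minutes, rem = divmod(seconds, 60)
print(f"1 AU = {int(minutes)} min {rem:.1f} s of light travel time")
```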

The most obvious way to determine the distance to a nearby
planetary body is to bounce an electromagnetic pulse off it and detect the partially reflected signal. Radio waves (think of radar) and laser beams both travel at the speed of light (a fundamental or universal physical constant), but the techniques are affected by atmospheric absorption and steep attenuation due to the distances and poor reflectivity of planetary bodies. It certainly helped that a retroreflector array was placed on the Moon in 1969 and the Earth-Moon distance can now be measured to a sub-mm level of accuracy.

The next technique is
parallax, which is a very ancient technique in which an object is viewed along two lines of sight and the distance is determined by the angle between them. This technique is often termed trigonometric parallax, and the same trigonometrical principles are used in theodolite measurements.


The technique exploits the fact that as the Earth orbits the Sun, the position of nearby stars appears to shift slightly against the more distant background. In measuring a star's location at two different positions in the Earth's orbit, the two lines of sight form a triangle, from which a parallax angle can be deduced. The distance to the star can be calculated knowing the Earth-Sun distance (1 AU). Obviously the closer the object the larger the parallax, but unfortunately all stellar parallaxes are less than 1 arcsecond.

A full rotation is made up of 360 degrees (°), where each degree is divided into 60 minutes of arc (arcmin or '), and each minute of arc is divided into 60 seconds of arc (arcsec or "). You can also find references to milliarcseconds, microarcseconds, and even square degrees and square arcminutes. Interestingly the 'nautical mile' was originally defined as one minute of latitude.
The full Moon has an average apparent size of about 31 arcminutes (or 0.52°), and the arcminute is about the resolution limit of the human eye. The operational resolution of the Hubble Space Telescope is about 0.1 arcseconds.

The closest star system to our solar system is Alpha Centauri, which has a parallax of 0.754 arcsec, so stellar parallaxes cannot be measured with the naked eye. In fact it was in 1838 that Friedrich Bessel (German, 1784-1846), at the Koenigsberg Observatory, was credited with the first use of the stellar parallax in calculating the distance to a star. He announced a parallax of 0.314 arcseconds for 61 Cygni, with an error of less than 10%.
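The distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. A quick sketch applying this rule to the two parallaxes just quoted:

```python
# Distance from a trigonometric parallax: d [parsecs] = 1 / p [arcseconds].
LY_PER_PC = 3.2616   # light-years per parsec

def parallax_to_pc(p_arcsec: float) -> float:
    """Distance in parsecs for a measured parallax angle in arcseconds."""
    return 1.0 / p_arcsec

# The two parallaxes quoted above: Bessel's 61 Cygni and Alpha Centauri.
for name, p in [("61 Cygni", 0.314), ("Alpha Centauri", 0.754)]:
    d = parallax_to_pc(p)
    print(f"{name}: {d:.2f} pc, i.e. {d * LY_PER_PC:.1f} light-years")
```

Bessel's 0.314 arcsec puts 61 Cygni at about 3.2 parsecs (roughly 10.4 light-years), consistent with his stated error of less than 10% against the modern value of about 11.4 light-years.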

At that time
astronomers were looking to define an absolute magnitude (a measure of luminance) so that they could compare different stars which were at the same distance from Earth. But what unit of stellar distance would be sensible? And what to call it?

William Herschel (German-English, 1738-1822) had estimated the distance to Sirius and used that as the average distance to stars of the first magnitude. It has been said that both a certain F. Roberts in 1694 and Johann Elert Bode (German, 1747-1826) in 1768, suggested that the distance covered by light could be the ultimate way to measure the distances to the stars. So it would appear that the light-year was already in use in the 18th century as a measure of stellar distances. François Arago (French, 1786-1853) produced a table of stellar parallaxes expressed in years (of light), and in 1865 Camille Flammarion (French, 1842-1925) mentioned that the light from Sirius took 22 years to get to Earth. But we don't know when the exact expression light-year was first employed by astronomers, although it did appear in an English dictionary in 1868.

At the beginning of the 20th century there were several suggestions for defining and naming an astronomical distance, and it all came to a head in 1913. The problem with light-year was that it is not easy to link a 'light travel time' to a parallax measurement. It would appear that Herbert Hall Turner (English, 1861-1930) had suggested 'parsec' during a lecture in 1913. Ejnar Hertzsprung (Danish, 1873-1967) used 1 parsec as a reference in 1913, and Arthur Eddington (English, 1882-1944) used it in 1914. Some people wanted a word that could be understood by the non-specialist, some thought light-year was usable and parsec was practical but 'ugly', and others just went ahead and used parsec. In 1919 there was a proposal to use light-year for popular articles and parsec for the specialist, but in 1922 it was decided to just use parsec. However it must be said that it was only in the early 1930's that parsec became fully accepted.

As a quick aside, James Bradley (English, 1692-1762) in attempting to detect stellar parallaxes actually discovered the aberration of light. He made the first direct observational proof of the Copernican theory that the Earth moves around the Sun. He detected the slow nodding of the Earth's axis due to the Moon's attraction and called it 'nutation', and later he would also improve on the accuracy of the speed of light. Even in the 19th century his name sat alongside that of Kepler, but today he has been almost totally forgotten.

But what exactly is a
parsec (unit pc)? Firstly Wikipedia tells us that it is a portmanteau of 'parallax of one arcsecond'. Below we can see that two photographs of the same nearby star taken six months apart show that the star appears to move against the more distant stars in the background. The distance the star has moved is related to the angle at which it is viewed, and the sightlines create a triangle, with the parallax angle being half the angle at the triangle's apex. The parsec is the distance at which a star would show a parallax angle of exactly 1 arcsecond ("), and it corresponds to about 3.26 light-years.


The parsec (pc), along with kpc, Mpc, or even Gpc, is a distance measurement inferred from an angular signature using parallax. But it is a useful metric because it provides a manageable sense of distance, e.g. the diameter of the Milky Way is about 30 kpc, our Sun is located at about 8 kpc from the galactic centre, and the Andromeda galaxy, our nearest large galaxy, lies at about 0.78 Mpc.

The next problem is that even a parallax of 0.01 arcsec, which would represent a distance of about 100 parsecs, would only enclose a few hundred of our nearest stars. And even going to 0.001 arcsec would extend out to about 1000 parsecs, but would still only include around 100,000 stars. In fact by 1980 only about 8000 stellar parallaxes had been published, and less than 2000 were measured to better than a 10% relative uncertainty.

Then in 1989 a new era of
astrometry was launched with the Hipparcos satellite. In 1997 a catalogue of 118,200 stars was published with an accuracy of 0.002 arcseconds, and this was followed up in 2000 with a catalogue of 2.5 million stars with an accuracy of 0.03 arcseconds. Michael Perryman has published two excellent reviews, firstly Hipparcos: a Retrospective in 2011 and then The History of Astrometry in 2012.

This work was followed up by the
Gaia satellite (launched in 2013), which is still operating. The data release in 2018 included astrometric data on 1.38 billion sources, with a parallax accuracy of 0.00004 arcseconds for the brighter sources, and 'only' 0.0007 arcseconds for the faintest sources (the final astrometric accuracy target is 0.000025 arcseconds).
While the data is clearly of scientific interest, it can also be summarised into some quite compelling headlines. For example, the best accuracy is equivalent to being able to spot a coin lying on the surface of the
Moon. Also Gaia found 620,000 stars (or 'unresolved stellar systems') within 100 parsecs and 5200 within 20 parsecs. This yields a typical density of 0.15 stars per cubic parsec in the neighbourhood of our solar system (that is about 1 star per 10⁴¹ cubic kilometres or per about 300 cubic light-years). Using that typical density yields a mean distance between neighbouring stars of about 1.04 parsecs, as compared to 1.3 parsecs for the distance between the Sun and its nearest star Proxima Centauri.
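Those figures fit together: for points scattered at random (a Poisson distribution), the mean distance to the nearest neighbour is about 0.554·n^(-1/3), where n is the number density. A small sketch using Gaia's count of 620,000 stars within 100 parsecs, under that randomness assumption:

```python
import math

# Gaia's local census: ~620,000 stars within 100 parsecs of the Sun.
# For randomly scattered points (Poisson), the mean distance to the
# nearest neighbour is ~0.554 * n^(-1/3), n being the number density.
stars = 620_000
volume_pc3 = (4 / 3) * math.pi * 100 ** 3    # sphere of radius 100 pc
n = stars / volume_pc3                       # stars per cubic parsec

print(f"local density: {n:.2f} stars per cubic parsec")
print(f"mean nearest-neighbour distance: {0.554 * n ** (-1 / 3):.2f} pc")
```

This reproduces the quoted density of about 0.15 stars per cubic parsec and a nearest-neighbour distance of just over 1 parsec.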

The next techniques on our
astronomical ladder are secular and classical statistical parallaxes, sometimes both referred to simply as statistical parallax.

Astronomical constants

The Cosmic Microwave Background

The microwave background radiation provides a clear picture of the young
Universe, where weak ripples on an otherwise uniform background display a pattern that convincingly supports the 'standard model' for the cosmic mass/energy budget and for the processes that initially imprinted cosmic structure. At the time there were no planets, stars, or galaxies, none of the striking large-scale structures we can see today. The richness of the observed astronomical view that we see today grew later in a complex and highly non-linear process driven primarily by gravity.

CMB Planck

This CMB picture was collected in 2013 by the Planck satellite. It is a snapshot of the oldest light in the Universe, and dates from when the Universe was just 380,000 years old. What we see are tiny temperature fluctuations that correspond to regions of slightly different densities, and these represent the seeds of all future structures, e.g. the stars and galaxies we see today. We are not looking at an absolute measure of temperature, but a difference between the measured temperature and an averaged temperature.

The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the Universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the Universe, and in particular, whether the total density is above or below the so-called critical density. During the 1970's, most attention focused on pure-baryonic models, e.g. based on protons and neutrons, or more generally composite sub-atomic particles made of three quarks. But the models were seriously challenged to explain the formation of galaxies, given the small anisotropies in the CMB measured at that time. In the early 1980's, the observations could be explained if cold dark matter (CDM) dominated over the baryons, within the framework of cosmic inflation. During the 1980's, most research focused on models with around 95% cold dark matter and 5% baryons, however observations around 1988-90 showed more large-scale galaxy clustering than predicted.
These difficulties were highlighted when in 1992 new measurements showed that CMB anisotropy was higher than previously thought. Several modified cold dark matter models were studied during the mid-1990's, and the ΛCDM model (lambda Cold Dark Matter) became the leading model following the observations of accelerating expansion in 1998. So far, more precise satellite-based measurements of the cosmic microwave background continue to support the model.
One problem is that the ΛCDM model has no explicit physical theory for the origin or physical nature of dark matter or dark energy. On the other hand, we now realise that galaxy formation is an ongoing process, not an event that just created some "island universes" (galaxies) in the distant past.

But the motion of Earth, 370 km/s relative to the local cosmic microwave background (CMB), is observed as the large-scale Doppler dipole anisotropy, as shown in figure 5. Doppler fluctuations caused by local motions in the early universe contributed to the small-scale CMB anisotropy that helps to determine the early uniformity of mass distributions and the fraction of dark matter in the universe.
FIGURE 5. ANISOTROPY IN THE COSMIC MICROWAVE BACKGROUND. The Doppler dipole anisotropy (left) is caused by the motion of Earth. The small-angle anisotropy (right), after subtracting the dipole, is caused partly by Doppler scattering of photons in the early universe. (Images courtesy of NASA.)

Doppler Effect, Redshift and Hubble's Law

We must stop for a moment and think about a time when suddenly our 'small' galaxy-sized Universe became an expanding Universe over 10 billion years old and containing 100 billion galaxies. What appeared in around 1925 was proof that external galaxies lay at huge distances, much greater than between objects in our own galaxy.
Something called a
redshift 𝒛 was defined as

𝒛 = 𝝺/𝝺₀ - 1

where a spectral line of an emitted wavelength 𝝺₀ is later observed at a wavelength 𝝺.
For nearby objects, the redshift corresponds to a recession velocity 𝒗 given by a simple Doppler formula, 𝒗 = c𝒛. Hubble showed that a relation existed between distance and redshift (see below).

Hubble's Data 1929

Hubble's idea was that more distant galaxies recede faster. This observation suggested that the Universe as a whole was expanding. The relation between recession velocity and distance is linear, and the dependence must be the same when observed from any other galaxy. Hubble's Law tells us that galaxies are moving away from the Earth at speeds proportional to their distance. You often find the equation

v = H0D

where v is the speed of separation, D the proper distance to a galaxy (which will change over time), and H0 a constant of proportionality, the Hubble Constant. The value proposed by Hubble in 1929 is nearly a factor of 10 steeper than the currently accepted value. The inverse of the Hubble Constant is the Hubble Time, the time since a linear cosmic expansion began, i.e. extrapolating a linear Hubble's Law back to time t = 0. The age of the Universe is approximately 14 billion years for an H0 of 70.
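As a quick sanity check, the Hubble time 1/H0 can be computed directly from the quoted value of H0. A minimal Python sketch (constants and function name are my own, not from any particular library):

```python
# Estimate the Hubble time (1/H0) in billions of years (Gyr).
# H0 is given in the conventional units of km/s/Mpc.

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc):
    """Hubble time 1/H0, converted to billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return (1.0 / h0_per_second) / SECONDS_PER_GYR

print(hubble_time_gyr(70))  # ~13.97 Gyr
```

For H0 = 70 this gives about 13.97 billion years, which is why "approximately 14 billion years" is quoted above; a larger H0 (a faster expansion) gives a younger Universe.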

The above graph is a copy of Hubble's results for the motion of 23 individual galaxies. Hubble used the radial velocities for the galaxies from the work of Vesto Slipher (American, 1875-1969), but did not reference it. He also built on the previous work of Willem de Sitter (Dutch, 1872-1934), Georges Lemaître (Belgian, 1894-1966), Howard P. Robertson (American, 1903-1961), and Henrietta Swan Leavitt (American, 1868-1921). And he had access to the most powerful telescope of the time, the 100-inch Hooker telescope at Mount Wilson. We have to remember that Einstein had introduced a cosmological constant into his equations to keep the Universe static, and now Hubble had shown that the Universe had been expanding all the time. Both Alexander Friedmann (Russian, 1888-1925) and Lemaître had developed non-static solutions to Einstein's equations, and Hubble's results proved that they had been on the right track.

Hubble's Law (2014)

Above we have a compilation of results dating from 2014 which shows that despite Hubble's considerable miscalculation of the Hubble Constant, his fundamental discovery of the expanding Universe is not affected, and the underlying relationship between velocity and distance remains valid. Hubble's Law enables the determination of Hubble distances to galaxies and quasars, and combined with redshift surveys it allows the determination of their spatial location and distribution. This revealed a remarkable interconnected large-scale network of galaxies, filaments and voids.

Hubble's original relationship of velocity vs. distance demonstrates that a really simple graph can still change forever the way we see our place in the Universe.

In the above compilation of results, H0 = 70 (always in kilometres per second per megaparsec). H0 sets the absolute scale, size and age of the Universe, and is one of the most direct ways to quantify how the Universe is evolving. Recent results have cast doubt on this figure of H0 = 70, with one team obtaining a value of 74 and another of 67.4. The first result was obtained with the highest-precision data for the Cepheid variables used as distance markers. The second result was predicted using the latest measurements of the cosmic microwave background. So a third set of measurements was made using a different kind of star, the red giant. This time they found H0 = 69.8, close to the original 70 and sitting between the two latest values. So rather than a tie-breaker, there are still open questions about either the measurements performed or the cosmological model used, or both. Everyone is now waiting for the next round of yet higher-precision surveys planned for the mid-2020s.

At the largest scale, the Hubble effect is a cosmological redshift caused by the expansion of space rather than an actual Doppler effect.

The expression 'proper distance' looks pretty innocuous, but in cosmology nothing is innocuous. Proper distance is an invariant measure between two spacelike-separated events whose value is the same for all observers, i.e. a distance as measured in an inertial frame of reference in which the events are simultaneous. The original proposal, dating from 1922, defined radial velocity as a function of luminosity distance measured from apparent magnitude. Since this first proposal the main observational problem has been the accuracy in determining H0. Radial velocity (in km/s) can be measured with high accuracy, so the problem was in measuring the distance. This is why the Hubble Constant started out at about 525 (kilometres per second per megaparsec) and is now about 70. The key was a result from 1977 which showed that the width of the 21-cm line of neutral hydrogen is linearly correlated with the absolute magnitude (the Tully-Fisher Relation). I don't intend to go beyond this very basic description, however this paper reviews other additional biases, and notes that Hubble's Law becomes increasingly uncertain at large distances.
Proper distance is not the only measure of distance; there is also comoving distance.

Now let's turn back to the question of redshift, and let's start with Christian Doppler (Austrian, 1803-1853), who in 1842 gave his name to the Doppler effect. This effect describes why we hear a higher frequency sound when a train is approaching and a lower frequency sound when the train is moving away. More generally, it is the perceived change in frequency (and wavelength) emitted by a source moving relative to the observer. The effect is used in ultrasound devices for observing blood flow in arteries and in radar traps for speeding cars.

The usual descriptions of the Doppler frequency shift are presented for sound waves and water waves. As seen in the first panel below we have a sound source at rest, and the frequency will depend on the period between two maxima. In the next panel we see the source moving at a constant velocity. As it moves towards the observer the maxima will be closer together, and the frequency is shifted upwards. If it moves away from the observer the maxima will become more separated and the frequency will shift downwards. Naturally, if the observer is also moving towards or away from the source, then they will see the maxima arrive even faster or slower respectively. Of course if the source and observer are moving at the same speed in the same direction, then no frequency change will be detected. It is this change in frequency as a function of the relative movement between source and observer that is called the Doppler effect. Even if the source and observer are moving at an angle relative to each other, the same basic logic concerning the arrival of maxima holds true.

Pulses from a Source

But what happens when our sound source approaches the speed of sound, or surpasses it? In the third panel we can see that the source is travelling at the speed of sound, and all the maxima are bunched up in the direction of movement (i.e. we can see them bunched up at the 'nose' of the sound source). These wavefronts are pressure, so there is a rapid increase in pressure from the accumulated maxima followed by a large decrease in pressure from the minima. The rapid change in pressure produces the sonic boom. We can see that an observer would not hear the sound until the source arrived next to them. In addition they would not have heard the frequency shift upwards as the source approached. All the sound waves that the observer did not hear as the source approached are dumped on the observer all at once, producing a 'thump' or 'bang' sound. If we take an airplane just approaching the speed of sound, what is happening is that a cone of air molecules is moving outward and rearward in all directions. The change of pressure is quite limited, but the rate of change (the sudden onset of the pressure change) makes the sonic boom audible.
In the fourth panel above, the source has gone supersonic and the sonic boom goes from a straight wavefront perpendicular to the motion of the source to being bent backwards in a cone, i.e. the sonic boom is now at an angle to the motion of the source. So the sonic boom occurs constantly at all speeds above the speed of sound.
Just checking: you should know why the speed of sound in hydrogen is 1,270 m/s, but only 326 m/s in oxygen, and yet it's 5,120 m/s in an iron bar. The reason is that oxygen is denser than hydrogen, and although iron is denser than both, it is far more elastic; the speed of sound increases with stiffness and decreases with density.
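The stiffness-versus-density trade-off can be made concrete with the standard textbook formulas: v = √(γRT/M) for an ideal gas, and, as a rough approximation for a thin rod, v = √(E/ρ) for a solid. A sketch, using typical handbook values for the material constants (the quoted figures above differ slightly because they assume different temperatures):

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def sound_speed_gas(gamma, molar_mass_kg, temp_k=293.0):
    """Speed of sound in an ideal gas: v = sqrt(gamma*R*T/M)."""
    return math.sqrt(gamma * R * temp_k / molar_mass_kg)

def sound_speed_rod(youngs_modulus_pa, density_kg_m3):
    """Approximate speed of sound along a thin solid rod: v = sqrt(E/rho)."""
    return math.sqrt(youngs_modulus_pa / density_kg_m3)

print(sound_speed_gas(1.41, 0.002016))  # hydrogen at 20 C: ~1300 m/s
print(sound_speed_gas(1.40, 0.032))     # oxygen at 20 C:   ~330 m/s
print(sound_speed_rod(211e9, 7870))     # iron rod:         ~5200 m/s
```

Hydrogen is fast because its molar mass M is tiny; iron is fastest of all because its Young's modulus E (~211 GPa) is enormous compared with its density.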

Sound Waves

In the above diagram the sound waves from a source moving faster than the speed of sound spread spherically from the point where they are emitted, but the source moves ahead of each wave. Constructive interference along the lines shown (actually a three-dimensional cone) creates a shock wave called a sonic boom, situated along the 'leading edge' of the cone. The faster the source, the smaller the angle 𝜃 of the shock wave. Inside the cone the interference is mostly destructive, so the sound intensity is much less than for the shock wave. A common misconception is that the sonic boom occurs as the plane breaks the sound barrier, i.e. accelerates to a speed higher than the speed of sound. Actually, the observer only hears the sonic boom when the trailing edge of the shock wave sweeps past them on the ground. Usually the aircraft will create two shock waves, one from the nose and one from the tail (though often the observer will hear them as one boom).
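The relation between source speed and cone angle described above is sin 𝜃 = v_sound/v_source = 1/M, where M is the Mach number. A small sketch (the function name is my own):

```python
import math

def mach_angle_deg(mach_number):
    """Half-angle of the shock cone, from sin(theta) = 1/M, valid for M > 1."""
    if mach_number <= 1.0:
        raise ValueError("a shock cone only forms for supersonic motion (M > 1)")
    return math.degrees(math.asin(1.0 / mach_number))

print(mach_angle_deg(2.0))  # 30 degrees
print(mach_angle_deg(5.0))  # ~11.5 degrees: faster source, narrower cone
```

At exactly Mach 1 the "cone" degenerates into the flat wavefront of the third panel (𝜃 = 90°), and below Mach 1 no cone forms at all.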
One of the problems is that a shock wave is not really a wave, and the circles in the diagram do not help, in that they suggest wave-like fronts. The reality is that a shock wave is not oscillatory; it is a discontinuity that follows the Rankine-Hugoniot conditions. A shock wave is the final stage of a nonlinear steepening wave that has reached a balance between steepening and energy dissipation. In simple terms, the steepening of a wave as it propagates is balanced by losses (dispersion, diffusion, viscosity, friction, etc.). If the losses dominate, the wave is simply damped. If the steepening wins against the losses, a shock forms as the gradient becomes very steep, giving the wave break or sonic boom. A fine example is a water wave that breaks because there was insufficient energy dissipation to limit the steepening: the faster parts of the wave outrun the slower parts, and the wave breaks.

Vapour Cone

If you are really out to impress someone, then try and explain the above photograph. People mistakenly say that it's the sonic boom, but in fact it is a vapour cone (or shock collar). The reason is that humid air is entering a low-pressure region. In the cone created behind the shock wave there is a reduced local density and temperature, and this is sufficient to cause water to supersaturate around the aircraft and condense. So it's just like a little cloud that disappears as soon as the pressure increases again. The Wikipedia article mentions that the effect is mistakenly called a Prandtl-Glauert singularity, whereas the 'singularity' was originally used to justify the idea that you could not fly faster than the speed of sound.
In fact it was once argued that an aircraft could not exceed the speed of sound because the pressure would destroy the structure, but that was proven wrong when Chuck Yeager (American, born 1923) broke the sound barrier on 14 October 1947. Interestingly, already during WW I sound ranging was used to locate distant artillery, a technique that drew on knowledge of the shock-wave patterns of projectiles in flight.

This model of wavefronts works just as well for light as for sound, at least if the velocity is much less than the speed of light (remembering that we cannot go faster than the speed of light in a vacuum). We can pick whatever inertial reference frame we want, and so we can place the source in the rest frame and move the observer at a constant velocity. Moving towards a light source, the frequency of the arrival of wave crests will increase, and if the observer is moving very fast the time dilation effect would need to be factored in.
In astronomy a redshift 𝒛 has been defined as

𝒛 = ∆𝝺/𝝺0 = 𝝺/𝝺0 - 1 = 𝝂0/𝝂 - 1

where a spectral line of an emitted wavelength 𝝺0 is later observed at a wavelength 𝝺 (𝝂0 and 𝝂 are the corresponding frequencies).

A positive 𝒛 is a redshift and a negative 𝒛 is a blueshift. This just means that the signal is redshifted to longer wavelengths (red being the longest visible wavelength), or blueshifted to shorter wavelengths. Often 'red' just means longer wavelengths, and 'blue' shorter wavelengths. For objects at low velocity compared to the speed of light in a vacuum

𝒛 ≈ v/c

where v is positive for sources moving away from the observer/receiver, and c is the speed of light in a vacuum. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency.
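For low velocities this is easy to make concrete: measure how far a known spectral line has shifted, form 𝒛, and multiply by c. A sketch using illustrative, rounded numbers for the H-alpha line (not a real measurement):

```python
C_KM_S = 299792.458  # speed of light in a vacuum, km/s

def redshift(observed_nm, emitted_nm):
    """z = lambda/lambda0 - 1."""
    return observed_nm / emitted_nm - 1.0

def recession_velocity_km_s(z):
    """Low-redshift approximation v = c*z (valid only for z << 1)."""
    return C_KM_S * z

# e.g. the H-alpha line emitted at 656.3 nm, observed at 662.0 nm
z = redshift(662.0, 656.3)
print(z)                           # ~0.0087, a redshift (positive z)
print(recession_velocity_km_s(z))  # ~2600 km/s, receding from us
```

A line observed at a shorter wavelength than emitted would give a negative 𝒛, i.e. a blueshift.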

So sources receding from the observer/receiver appear redshifted, while sources moving towards the observer appear blueshifted. The reality is that almost every distant object we look at is redshifted, and the further away it is the more redshifted it is, i.e. just like what would happen in a big explosion.

If we go back to the formula

v = H0D

we can substitute v (the speed of separation) to obtain

𝒛 ≈ H0D/c

where D is the proper distance to a galaxy (which will change over time), H0 the Hubble Constant, and c the speed of light in a vacuum.
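Rearranged, this gives a quick distance estimate D ≈ c𝒛/H0 for nearby galaxies. A sketch, valid only at low redshift where the linear Hubble's Law holds (function name is my own):

```python
C_KM_S = 299792.458  # speed of light in a vacuum, km/s

def hubble_distance_mpc(z, h0=70.0):
    """Approximate proper distance in Mpc from a small redshift: D = c*z/H0."""
    return C_KM_S * z / h0

print(hubble_distance_mpc(0.01))  # ~43 Mpc for z = 0.01 with H0 = 70
```

Note that the answer scales inversely with H0, which is exactly why the 74-versus-67.4 disagreement discussed above matters: the same redshift implies different distances.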

Doppler enunciated his ideas for acoustic waves, and he also predicted that the effect should apply to all types of waves. I understand that it was C.H.D. Buys Ballot (Dutch, 1817-1890) who actually verified Doppler's theory for sound. In 1842 Doppler published a paper on the coloured light of double stars. Although Doppler was mistaken in his assumption that stellar motion would cause a change in the broad-spectrum colour of a star, his derivation of frequency shifts was correct.
It was Hippolyte Fizeau (French, 1819-1896) who in 1848, following his early measurements of the speed of light, independently proposed the same idea for the spectral lines of stars. It would appear that it was in 1868 that William Huggins first applied the Doppler-Fizeau principle to the measurement of the radial velocity of a star moving away from the Earth. Sirius was observed to have a recessional velocity of 29 miles per second (uncorrected for the Earth's motion).

But it was Hermann Vogel who provided the first conclusive demonstration of the optical Doppler effect. In fact, using the Doppler shift of spectral lines on the Sun, he was able to calculate its equatorial rotational speed. In 1892 Vogel went on to observe Doppler line shifts in stars and provide the first accurate stellar radial velocities (speed along the line of sight). It was Aristarkh Belopolsky who in 1901 finally made a laboratory demonstration of the optical Doppler effect, using a narrow-linewidth light source and rapidly rotating mirrors.

The earliest occurrence of the term “red-shift” in print (in a hyphenated form) appears to be by the American astronomer Walter S. Adams in 1908, where he mentioned "Two methods of investigating the nature of the nebula red-shift". The word doesn't appear unhyphenated until about 1934, in a text by Willem de Sitter. Beginning with observations in 1912, Vesto Slipher discovered that most spiral nebulae had considerable redshifts. Then, in 1915, Albert Einstein developed his equations of general relativity. These equations connect matter and energy with the geometry of spacetime; in essence, they describe the laws of conservation, e.g. conservation of energy and momentum. The first exact solution of these equations was found by Karl Schwarzschild in 1916. It describes the spacetime geometry surrounding a spherically symmetric object of mass M, situated at the spatial coordinate r = 0. This metric gives an excellent description of the spacetime geometry around objects like the Sun and the Earth, which are, to a good approximation, spherical, and it forms the basis for an examination of the three classic tests of Einstein's theory of general relativity: 1) the perihelion advance of Mercury, 2) the bending of starlight by the Sun, and 3) the “gravitational redshift”. Later, Edwin Hubble discovered an approximate relationship between the redshift of nebulae, such as those observed by Slipher, and the distance to them, leading to the formulation of his Hubble's Law. This law is simply a statement that the redshift in light coming from a distant astronomical entity (for example, a galaxy) is proportional to its distance. These observations corroborated Alexander Friedmann's work from 1922, in which he derived the famous Friedmann equations, a set of equations that governs the expansion of space in homogeneous and isotropic models of the Universe within the context of general relativity.
This is considered to be the first observational basis for the expanding space paradigm and today serves as one of the most often cited pieces of evidence in support of the Big Bang theory. However, it may be interesting to note that Hubble, even up to his final lecture before the Royal Society, always held open the possibility that the redshift did not mean velocity of recession but might be caused by something else.

So what is the physical nature of redshift? The first suggestion concerning a cosmological redshift dates from 1916, and at numerous times the reality of an expanding Universe was questioned. It took until 2001 to conclude that the Tolman surface brightness test was "consistent with the reality of the expansion". In the modern ΛCDM model the redshift is interpreted as a space expansion and not due to the Doppler effect.

The classical Doppler formula works for sound because the wave speed is defined relative to the medium, but it fails for light and other electromagnetic waves, since their speed is not relative to an underlying medium, but to the observer.

On a larger scale, the velocity curves of stars within galaxies, which provide some of the most compelling evidence for the existence of dark matter, are observed by Doppler spectroscopy.

I just love physics when one idea leads to another. The shock wave is just one example of a broader phenomenon called a bow wake. A bow wake is created when the wave source moves faster than the wave propagation speed. Water waves spread out in circles from the point where they are created, and the bow wake is the familiar V-shaped wake trailing the source. A more exotic bow wake is created when a subatomic particle travels through a medium faster than light travels in that medium (in a vacuum light travels at its maximum speed c, but in water it travels at closer to 0.75c). If the particle creates light in its passage, that light spreads on a cone with an angle indicative of the speed of the particle. This electromagnetic analogue of a shock wave is known as Cherenkov radiation (the Wikipedia article is here), created when a charged particle travels through a medium at a velocity faster than the phase velocity of light in that medium (which for many media is some fraction of c), and it is commonly observed in particle physics.
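The geometry mirrors the sonic-boom cone: radiation is only emitted above the threshold speed c/n, and the cone angle satisfies cos 𝜃 = 1/(nβ), with β = v/c and n the refractive index. A sketch using water (n ≈ 1.33) as the medium (function names are my own):

```python
import math

def cherenkov_threshold_beta(n):
    """Minimum particle speed, as a fraction of c, for Cherenkov light."""
    return 1.0 / n

def cherenkov_angle_deg(beta, n):
    """Cherenkov cone angle from cos(theta) = 1/(n*beta)."""
    if beta * n <= 1.0:
        raise ValueError("below threshold: no Cherenkov radiation")
    return math.degrees(math.acos(1.0 / (n * beta)))

print(cherenkov_threshold_beta(1.33))   # ~0.75c, matching the figure for water
print(cherenkov_angle_deg(0.99, 1.33))  # ~41 degrees for a near-light-speed particle
```

Just as with the Mach cone, a faster particle gives a wider opening between the particle's path and the wavefront normal, which is how water-tank neutrino detectors infer particle speeds from the ring of Cherenkov light.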

Galaxies equidistant from us, all moving away at the same speed
Galaxies twice as far are moving away twice as fast
So going back in time, all galaxies will come together at the same instant at ~1/H0 ≈ 14 x 10^9 yr (given the present expansion rate: H0 ≈ 70 km/s/Mpc)
i.e. the entire universe originated in a ‘Big Bang’ about 14 billion years ago
... but this was the birth of space-time, not an explosion in space!
The redshift cannot be a Doppler shift because there are galaxies with z > 1 ... their recession speed cannot be just cz!
To get around this problem some texts suggest using the Special Relativistic Doppler formula, 1 + 𝒛 = √((1 + v/c)/(1 - v/c)).
This is one of the most distant galaxies known (having a measured spectrum) at z ~ 7
... But then all galaxies at high z would have the same speed ~0.999999... times c
so their distribution could no longer be homogeneous, and this would violate the Copernican Principle!
The redshift of distant galaxies should not be interpreted as a Doppler effect ... it is not a concept appropriate to curved space-time
The redshift occurs because the wavelength of light is increased by the stretching of space-time
So if we can look back far enough in time, we should see a hot, dense ‘fireball’ covering the sky

Finally we have found the ‘bright sky’ we were looking for ... but this primordial light from the hot plasma of the early universe (about 400,000 years after the Big Bang) has been redshifted to microwave frequencies ...

“Our entire observable universe is inside this sphere of radius 13.3 billion light-years, with us at the center. Space continues outside the sphere, but this opaque glowing wall of hydrogen plasma hides it from our view. This censorship is frustrating, since if we could see merely 380,000 light-years beyond it, we would behold the beginning of the universe”

The Cosmic Infrared Background


The Cosmic Neutrino Background

cosmic neutrino background (CNB)

Big Bang

Chronology of the Universe

Timeline of Epochs in Cosmology


At the time Vilenkin wrote his paper the picture of the very early universe had undergone a revolution in the form of the so-called inflation theory. To put it briefly, according to this theory there was an extremely brief phase in the history of the very early universe, shortly after the Planck time, in which empty space expanded at a stupendous speed. Although the inflation lasted only from 10^-36 sec to about 10^-33 sec after t = 0, during this brief interval of time space expanded by a factor of 10^30 or more. The basic mechanism responsible for the huge expansion is believed to be a hypothetical “inflaton field” which can be represented by a quantum version of the cosmological constant appearing in Einstein's equations. This constant has the remarkable property that it leads to a negative pressure and an associated vacuum energy density (rather than the energy itself) which remains constant. It follows that the inflation generates an enormous amount of energy, almost out of nothing. After the brief inflationary phase, the much slower normal expansion of the now very hot and energy-rich space takes over.
It all sounds very exotic, almost incredible, but most cosmologists consider the inflation scenario, in one of its many versions, to be convincing because of its explanatory and predictive power. They believe that we know what the universe looked like just 10^-35 sec after t = 0. This is most interesting, but cynics will argue that it does not bring us nearer to answering the question of the ultimate origin. Where did the inflaton field come from?
The success of the inflation theory drew increased attention to the role of the cosmological constant as a measure of the energy density of the vacuum. However, it was not studies of the very early universe that confirmed Einstein's cosmological constant but astronomical observations of the present expansion rate. In the late 1990s it turned out that the universe is accelerating, meaning that it expands at an increasing rate as if it is blown up by a self-repulsive “dark energy.” The precise nature of this dark energy is still unknown but the consensus view is that it is a manifestation of the vacuum energy associated with the cosmological constant. This strange kind of energy actually dominates the present universe, as it makes up roughly two-thirds of all energy and matter in the universe and in the future will dominate even more. While the discovery of dark energy has great consequences for the far future of the universe it is not equally relevant for the very early universe. On the other hand, it underlines the importance of the vacuum energy density as a fundamental characteristic of our universe.
The original inflation theory was soon developed into versions of “eternal inflation” primarily by Vilenkin and Andrei Linde, who suggested that in the universe as a whole, new inflating regions will be produced more rapidly than non-inflating regions. Inflation is self-generating, if not in our observed universe then in the much bigger and presumably infinite universe at large. According to Linde, “the universe is an eternally existing, self-reproducing entity that is divided into many mini-universes much larger than our observable portion, and ... the laws of low-energy physics and even the dimensionality of space-time may be different in each of these mini-universes.” [35]

Big Bang: The Planck Epoch

From t=0 to t=10^-43 seconds after the Big Bang

The Planck Epoch, or the Planck Era, is the time period ranging from the emergence of the Big Bang (t=0 sec) to the Planck time (t=10^-43 sec). It is named after the German physicist Max Planck. In this extremely tiny period, all the known laws of physics break down.
Scientists have so far not succeeded in determining what happened during this period after the Big Bang. Although the Large Hadron Collider (LHC) at CERN has produced significant observational results like the Higgs boson, it has never been able to find anything to describe the Planck Epoch. The reason is that any scientific experiment occurs on a timescale vastly longer than the order of the Planck time.
In order to know what really happened during the Planck Epoch, physicists have to unify the two most fundamental theories: General Relativity (the physics of large scales) and Quantum Mechanics (the physics of small scales).
However, let's focus on what we know about the Planck Era, even though the information is limited.
The size of the universe during this period was so small that it did not exceed the size of an atom. The universe only started growing after the Big Bang, and such a tiny period of time is not sufficient to allow a large-scale universe to exist.
Another known feature is the huge temperature, pressure, and density that prevailed. The temperature during the Planck Era exceeded a decillion (10^33) Kelvin, and the pressure was quintillions (10^18) of times the pressure of a swimming pool.
Light did not exist in the Planck Era because fundamental particles were unable to combine under such temperature and pressure, so there were no electrons orbiting nuclei. In other words, there was no such thing as an atom.
And the mystery continues. Anything that occurred before the Planck time is beyond the scope of current human knowledge, and so research continues in an attempt to reach a clearer picture of what happened during this period.

Planck Epoch

The Planck epoch is an era in traditional (non-inflationary) Big Bang cosmology immediately after the event which began the known universe. During this epoch, the temperature and average energies within the universe were so high that everyday subatomic particles could not form, and even the four fundamental forces that shape the universe — gravitation, electromagnetism, the weak nuclear force, and the strong nuclear force — were combined and formed one fundamental force. Little is understood about physics at this temperature; different hypotheses propose different scenarios. Traditional big bang cosmology predicts a gravitational singularity before this time, but this theory relies on the theory of general relativity, which is thought to break down for this epoch due to quantum effects.[9]
In inflationary models of cosmology, times before the end of inflation (roughly 10−32 seconds after the Big Bang) do not follow the same timeline as in traditional big bang cosmology. Models that aim to describe the universe and physics during the Planck epoch are generally speculative and fall under the umbrella of "New Physics". Examples include the Hartle–Hawking initial state, string theory landscape, string gas cosmology, and the ekpyrotic universe.

Planck Epoch

In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, tP, or approximately 10−43 seconds.[32] There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10−32 seconds (or about 10^10 tP).[33]
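The Planck time itself is not an arbitrary cut-off: it follows directly from the fundamental constants as t_P = √(ħG/c⁵). A minimal sketch using the CODATA constant values:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3/(kg*s^2)
C = 2.99792458e8        # speed of light in a vacuum, m/s

def planck_time_s():
    """Planck time t_P = sqrt(hbar*G/c^5), in seconds."""
    return math.sqrt(HBAR * G / C**5)

print(planck_time_s())  # ~5.39e-44 s, the "approximately 10^-43 seconds" above
```

It is the unique combination of ħ, G and c with the dimensions of time, which is why it marks the scale where quantum effects of gravity are expected to dominate.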

Grand Unification Epoch

Between 10^-43 seconds and 10^-36 seconds after the Big Bang

Grand Unification Epoch

As the universe expanded and cooled, it crossed transition temperatures at which forces separated from each other. These phase transitions can be visualized as similar to condensation and freezing phase transitions of ordinary matter. At certain temperatures/energies, water molecules change their behaviour and structure, and they will behave completely differently. Like steam turning to water, the fields which define our universe's fundamental forces and particles also completely change their behaviours and structures when the temperature/energy falls below a certain point. This is not apparent in everyday life, because it only happens at far higher temperatures than we usually see in our present universe.
These phase transitions in the universe's fundamental forces are believed to be caused by a phenomenon of quantum fields called "symmetry breaking".
In everyday terms, as the universe cools, it becomes possible for the quantum fields that create the forces and particles around us, to settle at lower energy levels and with higher levels of stability. In doing so, they completely shift how they interact. Forces and interactions arise due to these fields, so the universe can behave very differently above and below a phase transition. For example, in a later epoch, a side effect of one phase transition is that suddenly, many particles that had no mass at all acquire a mass (they begin to interact differently with the Higgs field), and a single force begins to manifest as two separate forces.
Assuming that nature is described by a so-called Grand Unified Theory (GUT), the grand unification epoch began with a phase transition of this kind, when gravitation separated from the universal combined gauge force. This caused two forces to exist: gravity, and an electrostrong interaction. There is no hard evidence yet that such a combined force existed, but many physicists believe it did. The physics of this electrostrong interaction would be described by a Grand Unified Theory.
The grand unification epoch ended with a second phase transition, as the electrostrong interaction in turn separated, and began to manifest as two separate interactions, called the strong and the electroweak interactions.

Grand Unification Epoch

In physical cosmology, assuming that nature is described by a Grand Unified Theory, the grand unification epoch was the period in the evolution of the early universe following the Planck epoch, starting at about 10^-43 seconds after the Big Bang, in which the temperature of the universe was comparable to the characteristic temperatures of grand unified theories. If the grand unification energy is taken to be 10^15 GeV, this corresponds to temperatures higher than 10^27 K. During this period, three of the four fundamental interactions (electromagnetism, the strong interaction, and the weak interaction) were unified as the electronuclear force. Gravity had separated from the electronuclear force at the end of the Planck era. During the grand unification epoch, physical characteristics such as mass, charge, flavour and colour charge were meaningless.
The grand unification epoch ended at approximately 10−36 seconds after the Big Bang. At this point several key events took place. The strong force separated from the other fundamental forces. It is possible that some part of this decay process violated the conservation of baryon number and gave rise to a small excess of matter over antimatter (see baryogenesis). This phase transition is also thought to have triggered the process of cosmic inflation that dominated the development of the universe during the following inflationary epoch.

Units of Energy

Energy is the ability of a physical system to do work on another physical system. And since work is defined as a force acting through a distance, energy is always equivalent to the ability to exert a pull or a push against one of the basic forces of nature, along a path of a certain length. The electronvolt (eV) is a unit of energy equal to about 1.602 × 10⁻¹⁹ joule. It is the amount of energy gained by a single unbound electron accelerated through an electrostatic potential difference of one volt, in vacuo. In other words, an electronvolt is 1 volt (1 joule per coulomb) multiplied by the electron charge.
Energy is directly proportional to temperature: the temperature in kelvin is the energy divided by Boltzmann's constant, k = 1.380649 × 10⁻²³ joule/kelvin.

So 1 eV is approximately equal to 10⁴ K, and 1 K is approximately equal to 10⁻⁴ eV.
Degrees Celsius (°C) was defined by 100 degrees between the ice point and the steam point, whereas the kelvin (K) is based upon the triple point of water. In the past people wrote °K, but the 'degrees' was rendered obsolete by international agreement in 1967.

Given that energy is equivalent to mass, it is also common practice to use the eV as a unit of mass (strictly, eV/c²).
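These unit relations are easy to script. Below is a minimal sketch in Python, using CODATA values for the constants; the helper function names are mine:

```python
# Physical constants (CODATA values).
E_CHARGE = 1.602176634e-19   # elementary charge, coulomb (1 eV in joules)
K_BOLTZMANN = 1.380649e-23   # Boltzmann's constant, joule/kelvin
C_LIGHT = 2.99792458e8       # speed of light, m/s

def ev_to_joule(ev):
    """1 eV = the electron charge multiplied by 1 volt."""
    return ev * E_CHARGE

def ev_to_kelvin(ev):
    """Temperature equivalent of an energy: T = E / k."""
    return ev_to_joule(ev) / K_BOLTZMANN

def ev_to_kg(ev):
    """Mass equivalent of an energy via E = m c^2."""
    return ev_to_joule(ev) / C_LIGHT**2

print(ev_to_kelvin(1.0))   # ~1.16e4 K, the '1 eV ~ 10^4 K' rule of thumb
print(ev_to_kg(0.511e6))   # electron rest mass, ~9.1e-31 kg
```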

The concept of temperature is roughly a measure of the average energy of the particles. Energy can be connected to the velocity of physical particles, which means that it is connected to mass, time, and length. For relativistic particles (e.g. photons) temperature is related to the average energy density, which demands a well-defined energy and volume. Finally, temperature is a statistical concept which has little meaning for individual particles.

Different epochs are mentioned here in terms of transition temperatures, which give rise to a time (in seconds) for each cosmic epoch.
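For the radiation-dominated era there is a standard rule of thumb linking the two: the age of the Universe in seconds is roughly (T in MeV)⁻². It ignores changes in the effective number of relativistic species, so treat it as order-of-magnitude only. A sketch:

```python
def time_after_big_bang(temp_mev):
    """Radiation-era rule of thumb: t [seconds] ~ (T / 1 MeV)^-2.
    Order-of-magnitude only; ignores changes in particle species."""
    return temp_mev ** -2

# Transition temperatures quoted later on this page:
print(time_after_big_bang(300e3))   # electroweak, ~300 GeV -> ~1e-11 s
print(time_after_big_bang(200))     # quark-hadron, 0.2 GeV -> ~2.5e-5 s
print(time_after_big_bang(1))       # neutrino decoupling, 1 MeV -> ~1 s
```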

Electroweak Epoch

Between 10-36 seconds and 10-32 seconds after the Big Bang

Electroweak Epoch
Depending on how epochs are defined, and the model being followed, the electroweak epoch may be considered to start before or after the inflationary epoch. In some models it is described as including the inflationary epoch. In other models, the electroweak epoch is said to begin after the inflationary epoch ended, at roughly 10−32 seconds.
According to traditional Big Bang cosmology, the electroweak epoch began 10⁻³⁶ seconds after the Big Bang, when the temperature of the universe was low enough (10²⁸ K) for the electronuclear force to begin to manifest as two separate interactions, called the strong and the electroweak interactions. (The electroweak interaction will also separate later, dividing into the electromagnetic and weak interactions.) The exact point where electrostrong symmetry was broken is not certain, because of the very high energies of this event.

Electroweak Epoch

In physical cosmology, the electroweak epoch was the period in the evolution of the early universe when the temperature of the universe had fallen enough that the strong force separated from the electroweak interaction, but was high enough for electromagnetism and the weak interaction to remain merged into a single electroweak interaction, above the critical temperature for electroweak symmetry breaking (159.5±1.5 GeV [1] in the Standard Model of particle physics). Some cosmologists place the electroweak epoch at the start of the inflationary epoch, approximately 10−36 seconds after the Big Bang.[2][3][4] Others place it at approximately 10−32 seconds after the Big Bang, when the potential energy of the inflaton field that had driven the inflation of the universe during the inflationary epoch was released, filling the universe with a dense, hot quark–gluon plasma.[5] Particle interactions in this phase were energetic enough to create large numbers of exotic particles, including W and Z bosons and Higgs bosons. As the universe expanded and cooled, interactions became less energetic, and when the universe was about 10−12 seconds old, W and Z bosons ceased to be created at observable rates. The remaining W and Z bosons decayed quickly, and the weak interaction became a short-range force in the following quark epoch.
The electroweak epoch ended with an electroweak phase transition, the nature of which is unknown. If first order, this could source a gravitational wave background.[6][7] The electroweak phase transition is also a potential source of baryogenesis,[8][9] provided the Sakharov conditions are satisfied.[10]
In the minimal Standard Model, the transition during the electroweak epoch was not a first or a second order phase transition but a continuous crossover, preventing any baryogenesis,[11][12] or the production of an observable gravitational wave background.[6] [7] However many extensions to the Standard Model including supersymmetry and the Two-Higgs-doublet model have a first order electroweak phase transition (but require additional CP violation).

Inflationary Epoch

Before 10-32 seconds after the Big Bang
Inflationary Epoch

At this point of the very early universe, the metric that defines distance within space suddenly and very rapidly changed in scale, leaving the early universe at least 10⁷⁸ times its previous volume (and possibly much more). This is equivalent to a linear increase of at least 10²⁶ times in every spatial dimension, equivalent to an object 1 nanometre (10−9 m, about half the width of a molecule of DNA) in length expanding to one approximately 10.6 light-years (100 trillion kilometres) long in a tiny fraction of a second. This change is known as inflation.
Although light and objects within spacetime cannot travel faster than the speed of light, in this case it was the metric governing the size and geometry of spacetime itself that changed in scale. Changes to the metric are not limited by the speed of light.
There is good evidence that this happened, and it is widely accepted that it did take place. But the exact reasons why it happened are still being explored. So a range of models exist that explain why and how it took place—it is not yet clear which explanation is correct.
In several of the more prominent models, it is thought to have been triggered by the separation of the strong and electroweak interactions which ended the grand unification epoch. One of the theoretical products of this phase transition was a scalar field called the inflaton field. As this field settled into its lowest energy state throughout the universe, it generated an enormous repulsive force that led to a rapid expansion of the metric that defines space itself. Inflation explains several observed properties of the current universe that are otherwise difficult to account for, including explaining how today's universe has ended up so exceedingly homogeneous (similar) on a very large scale, even though it was highly disordered in its earliest stages.
It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10−33 and 10−32 seconds after the Big Bang. The rapid expansion of space meant that elementary particles remaining from the grand unification epoch were now distributed very thinly across the universe. However, the huge potential energy of the inflaton field was released at the end of the inflationary epoch, as the inflaton field decayed into other particles, known as "reheating". This heating effect led to the universe being repopulated with a dense, hot mixture of quarks, anti-quarks and gluons. In other models, reheating is often considered to mark the start of the electroweak epoch, and some theories, such as warm inflation, avoid a reheating phase entirely.
In non-traditional versions of Big Bang theory (known as "inflationary" models), inflation ended at a temperature corresponding to roughly 10−32 seconds after the Big Bang, but this does not imply that the inflationary era lasted less than 10−32 seconds. To explain the observed homogeneity of the universe, the duration in these models must be longer than 10−32 seconds. Therefore, in inflationary cosmology, the earliest meaningful time "after the Big Bang" is the time of the end of inflation.
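The quoted figures can be cross-checked: a linear expansion factor of 10²⁶ corresponds to roughly 60 'e-folds' of exponential growth, since for exponential inflation a(t) = exp(Ht):

```python
import math

# Number of e-folds N for a linear expansion factor of 1e26,
# assuming pure exponential growth a(t) = exp(H t).
expansion_factor = 1e26
n_efolds = math.log(expansion_factor)     # ln(1e26) ~ 59.9
print(n_efolds)

# The volume grows as the cube of the linear factor: 1e78, as quoted above.
volume_factor = expansion_factor ** 3
print(volume_factor)
```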
After inflation ended, the universe continued to expand, but at a much slower rate. About 4 billion years ago the expansion gradually began to speed up again. This is believed to be due to dark energy becoming dominant in the universe's large-scale behaviour. It is still expanding today.
On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-modes power spectrum which was interpreted as clear experimental evidence for the theory of inflation.[11][12][13][14][15] However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported [14][16][17] and finally, on 2 February 2015, a joint analysis of data from BICEP2/Keck and the European Space Agency's Planck microwave space telescope concluded that the statistical "significance [of the data] is too low to be interpreted as a detection of primordial B-modes" and can be attributed mainly to polarized dust in the Milky Way.

Inflationary Epoch

In physical cosmology the inflationary epoch was the period in the evolution of the early universe when, according to inflation theory, the universe underwent an extremely rapid exponential expansion. This rapid expansion increased the linear dimensions of the early universe by a factor of at least 10²⁶ (and possibly a much larger factor), and so increased its volume by a factor of at least 10⁷⁸. Expansion by a factor of 10²⁶ is equivalent to expanding an object 1 nanometre (10−9 m, about half the width of a molecule of DNA) in length to one approximately 10.6 light-years (about 62 trillion miles) long.
The expansion is thought to have been triggered by the phase transition that marked the end of the preceding grand unification epoch at approximately 10−36 seconds after the Big Bang. One of the theoretical products of this phase transition was a scalar field called the inflaton field. As this field settled into its lowest energy state throughout the universe, it generated a repulsive force that led to a rapid expansion of space. This expansion explains various properties of the current universe that are difficult to account for without such an inflationary epoch.
It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10−33 and 10−32 seconds after the Big Bang. The rapid expansion of space meant that elementary particles remaining from the grand unification epoch were now distributed very thinly across the universe. However, the huge potential energy of the inflaton field was released at the end of the inflationary epoch, repopulating the universe with a dense, hot mixture of quarks, anti-quarks and gluons as it entered the electroweak epoch.
On 17 March 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, providing the first clear experimental evidence for cosmological inflation and the Big Bang.[1][2][3][4][5] However, on 19 June 2014, lowered confidence in confirming the cosmic inflation findings was reported.[6][7][8]
A preprint released by the Planck team in September 2014, eventually accepted in 2016, provided the most accurate measurement yet of dust, concluding that the signal from dust is the same strength as that reported from BICEP2.[9][10] On January 30, 2015, a joint analysis of BICEP2 and Planck data was published and the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way.[11] In 2015, the BICEP2, Keck Array and Planck data was combined within a joint analysis;[12] a March 2015 publication in Physical Review Letters set a limit on the tensor-to-scalar ratio of r < 0.12.

Fluctuations in this "inflaton" field are blown up to macroscopic scales and converted into genuine ripples in the cosmic energy density. These weak seed fluctuations grow under the influence of gravity and eventually produce galaxies and the cosmic web. Simple models of inflation predict the statistical properties of these primordial density fluctuations: their Fourier components should have random and independent phases and a near-scale-invariant power spectrum. Inflation also predicts that the present Universe should have a flat geometry. With concrete proposals for the nature of the dark matter and for the initial fluctuation distribution, the growth of cosmic structure became, for the first time, a well-posed problem that could be tackled with the standard tools of physics.
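These statistical properties can be illustrated with a toy one-dimensional field built from Fourier modes with random, independent phases and power-law amplitudes; the spectral index of 0.96 below is close to the measured value (exactly 1 would be scale-invariant). This is only a sketch, not a cosmological simulation:

```python
import math
import random

random.seed(1)

# Amplitudes |delta_k| ~ sqrt(P(k)) for a power-law spectrum P(k) ~ k^(n_s - 1);
# n_s = 1 is exactly scale-invariant, 0.96 is close to the measured value.
n_s = 0.96
modes = range(1, 65)
amps = {k: k ** ((n_s - 1) / 2.0) for k in modes}
phases = {k: random.uniform(0, 2 * math.pi) for k in modes}  # independent phases

def delta(x):
    """Density fluctuation at position x in a box of unit size."""
    return sum(amps[k] * math.cos(2 * math.pi * k * x + phases[k])
               for k in modes)

field = [delta(i / 256) for i in range(256)]
```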

Electroweak Symmetry Breaking

10-12 seconds after the Big Bang
The electroweak phase transition temperature is about 10¹⁵ K (300 GeV), at about 10-11 seconds after the Big Bang

Electroweak Symmetry Breaking

As the universe's temperature continued to fall below a certain very high energy level, a third symmetry breaking occurred. So far as we currently know, it was the penultimate symmetry-breaking event in the formation of our universe, the final one being chiral symmetry breaking in the quark sector. In the Standard Model of particle physics, electroweak symmetry breaking happens at a temperature of 159.5±1.5 GeV.[21] When this happens, it breaks electroweak gauge symmetry. This has two related effects:

  1. Via the Higgs mechanism, all elementary particles interacting with the Higgs field become massive, having been massless at higher energy levels.
  2. As a side-effect, the weak nuclear force and the electromagnetic force, and their respective bosons (the W and Z bosons and the photon), now begin to manifest differently in the present universe. Before electroweak symmetry breaking these bosons were all massless particles and interacted over long distances, but at this point the W and Z bosons abruptly become massive particles, interacting only over distances smaller than the size of an atom, while the photon remains massless and electromagnetism remains a long-range interaction.
After electroweak symmetry breaking, the fundamental interactions we know of—gravitation, electromagnetic, weak and strong interactions—have all taken their present forms, and fundamental particles have their expected masses, but the temperature of the universe is still too high to allow the stable formation of many particles we now see in the universe, so there are no protons or neutrons, and therefore no atoms, atomic nuclei, or molecules. (More exactly, any composite particles that form by chance, almost immediately break up again due to the extreme energies.)
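To put the electroweak scale into temperature units, divide the energy by Boltzmann's constant; 1 GeV corresponds to about 1.16 × 10¹³ K. A one-liner:

```python
K_PER_GEV = 1.16045e13   # kelvin per GeV, i.e. (1 GeV) / (Boltzmann's constant)

ewsb_gev = 159.5                 # electroweak symmetry-breaking scale, GeV
t_kelvin = ewsb_gev * K_PER_GEV
print(t_kelvin)                  # ~1.85e15 K, of order 10^15 K
```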

Electroweak Symmetry Breaking

In the Standard Model of particle physics, the Higgs mechanism is essential to explain the generation mechanism of the property "mass" for gauge bosons. Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions) would be considered massless, but measurements show that the W⁺, W⁻, and Z⁰ bosons actually have relatively large masses of around 80 GeV/c². The Higgs field resolves this conundrum. The simplest description of the mechanism adds a quantum field (the Higgs field) that permeates all space to the Standard Model. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons it interacts with to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W± and Z weak gauge bosons through electroweak symmetry breaking.[1] The Large Hadron Collider at CERN announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature.
The mechanism was proposed in 1962 by Philip Warren Anderson,[2] following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics.
A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert;[3] by Peter Higgs;[4] and by Gerald Guralnik, C. R. Hagen, and Tom Kibble.[5][6][7] The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism, or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism,[8] Anderson–Higgs mechanism,[9] Anderson–Higgs–Kibble mechanism,[10] Higgs–Kibble mechanism by Abdus Salam[11] and ABEGHHK'tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and 't Hooft) by Peter Higgs.


Quark Epoch

Between 10-12 seconds and 10-6 seconds after the Big Bang

Quark Epoch

The quark epoch began approximately 10−12 seconds after the Big Bang. This was the period in the evolution of the early universe immediately after electroweak symmetry breaking, when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons.[22][23]
During the quark epoch the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons.[22]
The quark epoch ended when the universe was about 10−6 seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons.[22]

Baryogenesis

Perhaps by 10−11 seconds after the Big Bang
Baryons are subatomic particles such as protons and neutrons, that are composed of three quarks. It would be expected that both baryons, and particles known as antibaryons would have formed in equal numbers. However, this does not seem to be what happened—as far as we know, the universe was left with far more baryons than antibaryons. In fact, almost no antibaryons are observed in nature. It is not clear how this came about. Any explanation for this phenomenon must allow the Sakharov conditions related to baryogenesis to have been satisfied at some time after the end of cosmological inflation. Current particle physics suggests asymmetries under which these conditions would be met, but these asymmetries appear to be too small to account for the observed baryon-antibaryon asymmetry of the universe.

Quark Epoch

In physical cosmology the Quark epoch was the period in the evolution of the early universe when the fundamental interactions of gravitation, electromagnetism, the strong interaction and the weak interaction had taken their present forms, but the temperature of the universe was still too high to allow quarks to bind together to form hadrons.[1] The quark epoch began approximately 10−12 seconds after the Big Bang, when the preceding electroweak epoch ended as the electroweak interaction separated into the weak interaction and electromagnetism. During the quark epoch the universe was filled with a dense, hot quark–gluon plasma, containing quarks, leptons and their antiparticles. Collisions between particles were too energetic to allow quarks to combine into mesons or baryons. The quark epoch ended when the universe was about 10−6 seconds old, when the average energy of particle interactions had fallen below the binding energy of hadrons. The following period, when quarks became confined within hadrons, is known as the hadron epoch.

Hadron Epoch

Between 10-6 seconds and 1 second after the Big Bang
A transition temperature of 10¹² K (0.2 GeV) at about 10-5 seconds after the Big Bang

The period around 10-5 seconds is often called the quark-hadron phase transition, in which the material content was mainly composed of hadrons, such as pions, neutrons, and protons. If we extrapolate back towards higher temperatures the hadrons dissolve, and the quarks break free. Some envisage this moment as a quark-gluon plasma, a hot and dense mixture of quarks and gluons.

As we move forward in time we will encounter atoms, which have time scales determined by transitions between energy levels, but what exactly is a quark-gluon plasma?

Hadron Epoch

The quark–gluon plasma that composes the universe cools until hadrons, including baryons such as protons and neutrons, can form. Initially, hadron/anti-hadron pairs could form, so matter and antimatter were in thermal equilibrium. However, as the temperature of the universe continued to fall, new hadron/anti-hadron pairs were no longer produced, and most of the newly formed hadrons and anti-hadrons annihilated each other, giving rise to pairs of high-energy photons. A comparatively small residue of hadrons remained at about 1 second of cosmic time, when this epoch ended.
Theory predicts that about 1 neutron remained for every 7 protons. We believe this to be correct because, at a later stage, all the neutrons and some of the protons fused, leaving hydrogen, a hydrogen isotope called deuterium, helium and other elements, which we can measure. A 1:7 neutron-to-proton ratio at the end of this epoch would indeed produce the observed element ratios in the early as well as the current universe.
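The arithmetic behind that claim: if essentially all the neutrons end up bound in helium-4 (two neutrons and two protons per nucleus), a 1:7 neutron-to-proton ratio gives a helium mass fraction of 25%:

```python
# Helium mass fraction implied by a neutron-to-proton ratio of 1:7,
# assuming essentially all neutrons end up in helium-4.
n_over_p = 1 / 7

# Each He-4 uses 2 neutrons + 2 protons, so Y = 2(n/p) / (1 + n/p).
Y = 2 * n_over_p / (1 + n_over_p)
print(Y)   # 0.25, close to the observed primordial helium abundance
```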

Hadron Epoch

In physical cosmology, the hadron epoch started 20 microseconds after the Big Bang.[1] The temperature of the universe had fallen sufficiently to allow the quarks from the preceding quark epoch to bind together into hadrons. Initially the temperature was high enough to allow the formation of hadron/anti-hadron pairs, which kept matter and anti-matter in thermal equilibrium. Following the annihilation of matter and antimatter, a tiny asymmetry of matter remained, and it persists to the present day. Most of the hadrons and anti-hadrons were eliminated in annihilation reactions, leaving a small residue of hadrons. Upon elimination of anti-hadrons, the Universe was dominated by photons, neutrinos and electron-positron pairs. One refers to this period as the lepton epoch.

Neutrino Decoupling

Around 1 second after the Big Bang

t ~ 1 s
Recall the starting point from when the temperature was 10 MeV.

  • Photons.
  • Neutrinos and antineutrinos.
  • Electrons and antielectrons.
  • A comparatively very small number of protons and neutrons.
  • Number of protons = number of neutrons.
    • Energy difference between a proton and a neutron is small, only about 1 MeV.
    • Weak interactions establish an equilibrium
      • p + electron <--> n + neutrino.
      • p + antineutrino <--> n + antielectron.
  • No nuclei.
    • If one forms, it is quickly broken up in, say, a collision with a high-energy photon.

Now at T ~ 1 MeV three things happen at approximately the same time
(1) Neutrinos decouple

  • The reaction e+ + e- <--> neutrino + antineutrino
    is not fast enough at the lower density and lower energy.
  • Other neutrino interactions are similarly too rare to have much effect.
  • The neutrinos and antineutrinos remain, but they no longer interact significantly with the rest of matter.
  • The neutrinos should still be here.
  • They should form a neutrino background radiation with a temperature of 1.96 K.
  • They are thus a little cooler than the photon background radiation with a temperature of 2.73 K.
  • We can see the photon background, but the neutrino background remains so far undetected.
(2) e+ and e- annihilate
  • The energy it takes to make an electron plus an antielectron is about 1 MeV.
  • Even after the neutrino reactions have become ineffective, the reaction e+ + e- <--> photon + photon
    would keep the number of electrons equal to the number of photons.
  • But with the temperature dipping, the photons no longer have so much energy.
  • The reaction photon + photon --> e+ + e-
    stops happening.
  • The reaction e+ + e- --> photon + photon keeps going, however.
  • In less than a second, almost all of the electrons and antielectrons are gone.
  • Almost all, but there were a few more electrons than antielectrons.
  • When all of the antielectrons are gone, the few extra electrons remain behind.
(3) Neutrons start to turn into protons
  • As the temperature dips, the reactions
    • p + electron --> n + neutrino.
    • p + antineutrino --> n + antielectron.
  • that turn protons into neutrons slow down because neutrons have about 1 MeV more energy than protons and it is hard to find that much energy any more.
  • But the reactions
    • n + neutrino --> p + electron.
    • n + antielectron --> p + antineutrino.
  • continue with full intensity.
  • The ratio of neutrons to protons starts to drop.
  • Just as the ratio of neutrons to protons is dropping towards zero, the neutrino reactions are becoming ineffective.
  • The ratio becomes almost frozen at (number of neutrons)/(number of protons) ~ 1/6.
  • Over the next couple of minutes, this ratio will slowly decrease to about 1/7.
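Both ratios can be reproduced in a few lines. The equilibrium ratio is exp(−Δm/kT), with Δm ≈ 1.29 MeV the neutron-proton mass difference; the freeze-out temperature of 0.72 MeV used below is an assumed round value chosen to match the quoted 1/6, and the later decrease follows from free-neutron decay (measured lifetime about 880 s):

```python
import math

DELTA_M_MEV = 1.293   # neutron-proton mass-energy difference, MeV
TAU_N = 880.0         # free-neutron mean lifetime, seconds

def equilibrium_ratio(t_mev):
    """n/p in thermal equilibrium at temperature T: exp(-delta_m / kT)."""
    return math.exp(-DELTA_M_MEV / t_mev)

ratio_freeze = equilibrium_ratio(0.72)   # ~0.17, i.e. roughly 1/6

def ratio_after(t_seconds, r0=1 / 6):
    """n/p after free-neutron decay for t_seconds, starting from r0."""
    n0, p0 = r0, 1.0
    decayed = n0 * (1 - math.exp(-t_seconds / TAU_N))
    return (n0 - decayed) / (p0 + decayed)

print(ratio_after(180))   # after ~3 minutes, roughly 1/7
```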

Neutrino Decoupling

At approximately 1 second after the Big Bang neutrinos decouple and begin travelling freely through space. As neutrinos rarely interact with matter, these neutrinos still exist today, analogous to the much later cosmic microwave background emitted during recombination, around 370,000 years after the Big Bang. The neutrinos from this event have a very low energy, around 10−10 times smaller than is possible with present-day direct detection.[25] Even high energy neutrinos are notoriously difficult to detect, so this cosmic neutrino background (CνB) may not be directly observed in detail for many years, if at all.[25]
However, Big Bang cosmology makes many predictions about the CνB, and there is very strong indirect evidence that the CνB exists, both from Big Bang nucleosynthesis predictions of the helium abundance, and from anisotropies in the cosmic microwave background (CMB). One of these predictions is that neutrinos will have left a subtle imprint on the CMB. It is well known that the CMB has irregularities. Some of the CMB fluctuations were roughly regularly spaced, because of the effect of baryonic acoustic oscillations. In theory, the decoupled neutrinos should have had a very slight effect on the phase of the various CMB fluctuations.[25]
In 2015, it was reported that such shifts had been detected in the CMB. Moreover, the fluctuations corresponded to neutrinos of almost exactly the temperature predicted by Big Bang theory (1.96±0.02 K compared to a prediction of 1.95 K), and exactly three types of neutrino, the same number of neutrino flavours currently predicted by the Standard Model.

Neutrino Decoupling

In Big Bang cosmology, neutrino decoupling was the epoch at which neutrinos ceased interacting with other types of matter [1], and thereby ceased influencing the dynamics of the universe at early times.[2] Prior to decoupling, neutrinos were in thermal equilibrium with protons, neutrons and electrons, which was maintained through the weak interaction. Decoupling occurred approximately at the time when the rate of those weak interactions was slower than the rate of expansion of the universe. Alternatively, it was the time when the time scale for weak interactions became greater than the age of the universe at that time. Neutrino decoupling took place approximately one second after the Big Bang, when the temperature of the universe was approximately 10 billion kelvin, or 1 MeV.[3]
As neutrinos rarely interact with matter, these neutrinos still exist today, analogous to the much later cosmic microwave background emitted during recombination, around 377,000 years after the Big Bang. They form the cosmic neutrino background (abbreviated CvB or CNB). The neutrinos from this event have a very low energy, around 10−10 times smaller than is possible with present-day direct detection.[4] Even high energy neutrinos are notoriously difficult to detect, so the CNB may not be directly observed in detail for many years, if at all.[4] However, Big Bang cosmology makes many predictions about the CNB, and there is very strong indirect evidence that the CNB exists.
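The predicted CNB temperature follows from entropy conservation: electron-positron annihilation heated the photons but not the already-decoupled neutrinos, so today T_nu = (4/11)^(1/3) × T_photon:

```python
# Cosmic neutrino background temperature today, assuming electron-positron
# annihilation heated the photons but not the decoupled neutrinos.
T_CMB = 2.725                        # measured photon background temperature, K
t_nu = (4 / 11) ** (1 / 3) * T_CMB
print(t_nu)                          # ~1.95 K, the predicted value quoted above
```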

Lepton Epoch

Between 1 second and 10 seconds after the Big Bang

In the lepton era the dominant contribution to the energy density came from electrons, electron-neutrinos, and other leptons (positrons, muons, etc.).

Lepton Epoch

The majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch, leaving leptons (such as electrons, muons and certain neutrinos) and antileptons dominating the mass of the universe.
The lepton epoch follows a similar path to the earlier hadron epoch. Initially leptons and antileptons are produced in pairs. About 10 seconds after the Big Bang the temperature of the universe falls to the point at which new lepton–antilepton pairs are no longer created and most remaining leptons and antileptons quickly annihilated each other, giving rise to pairs of high energy photons, and leaving a small residue of non-annihilated leptons.

Lepton Epoch

In physical cosmology, the lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the Universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch.[1] During the lepton epoch the temperature of the Universe was still high enough to create neutrino and electron-positron pairs. Approximately 10 seconds after the Big Bang the temperature of the universe had fallen to the point where electron-positron pairs were gradually annihilated.[2] A small residue of electrons needed to charge-neutralize the Universe remained along with free streaming neutrinos: an important aspect of this epoch is the neutrino decoupling.[3] The Big Bang nucleosynthesis epoch follows overlapping with the photon epoch.

Big Bang Nucleosynthesis

Between 10⁻² seconds and 10² seconds after the Big Bang
temperatures from 10¹¹ K (10 MeV) to 10⁹ K (0.1 MeV)

This period, often called Big Bang nucleosynthesis, is all about understanding the possible nuclear reactions among light nuclei at different temperatures.

If we differentiate between the Big Bang and the Hot Big Bang (HBB), then the HBB starts from the first second after the Big Bang and runs until today. We want to understand the large-scale behaviour of the Universe as a kind of 'background' against which smaller events occur, e.g. the creation of stars, planets, etc. We have already introduced 'The Cosmological Principle', where the idea is that the 'background' is a smeared-out distribution of matter, a smearing that makes the Universe look quite smooth.

Density Fluctuations

We don't need to look at the details of the above graph to see that the Universe looks smoother and smoother as we probe it on larger scales; e.g. beyond the size of the largest superclusters, perturbations become smaller than the background value. So the idea is that at this scale the matter distribution can be separated into a smooth background encrusted with small perturbations. We have also seen that the temperature fluctuations in the cosmic microwave background (CMB) are of the order of 10⁻⁵, which supports the isotropy hypothesis. In addition, the isotropy hypothesis is also supported by redshift surveys and the distribution of radio galaxies, the X-ray background and the Lyman-alpha forest. With the evidence of isotropy, the outcome of the Copernican Principle is that the Universe is isotropic at every point, meaning that it is also spatially homogeneous.
Accepting this, we have already introduced the
Friedmann-Lemaître-Robertson-Walker metric that follows from the geometry properties of homogeneity and isotropy. We also mentioned the dimensionless scale factor which characterises the expansion of the Universe and is dependent upon the energy content of the Universe. To find out how the contents of the Universe determines the time evolution of the scale factor we need a theory of gravity.
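As an illustration (not a derivation), the first Friedmann equation for a flat universe can be sketched in a few lines of Python. The density parameters below are assumed, roughly Planck-like round numbers, not figures quoted in this text:

```python
import math

# First Friedmann equation for a flat universe:
#   H(a) = H0 * sqrt(Om_r/a^4 + Om_m/a^3 + Om_L)
# with illustrative (assumed) present-day density parameters.
H0 = 67.7                   # Hubble constant today, km/s/Mpc
Om_r = 9.0e-5               # radiation (photons + neutrinos)
Om_m = 0.31                 # matter (dark + baryonic)
Om_L = 1.0 - Om_m - Om_r    # dark energy (flatness assumed)

def hubble(a):
    """Hubble parameter (km/s/Mpc) at scale factor a (a = 1 today)."""
    return H0 * math.sqrt(Om_r / a**4 + Om_m / a**3 + Om_L)

# Matter-radiation equality: the scale factor at which the radiation and
# matter densities match, giving a redshift of a few thousand.
a_eq = Om_r / Om_m
z_eq = 1.0 / a_eq - 1.0
```

Because radiation dilutes as a⁻⁴ but matter only as a⁻³, radiation inevitably loses its early dominance, which is the transition described in the introduction.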
The early
Universe was dominated by radiation, and it's usual to talk about the thermodynamic temperature of the relativistic particle species (only photons and neutrinos).

The next step, and it's one of the most powerful outcomes of the hot
Big Bang theory, is the creation of chemical elements. The lightest of them were created in the first three minutes, when the Universe was just a very hot plasma. The process is called Big Bang nucleosynthesis and is where neutrons and protons first combine to form deuterium (hydrogen-2), helium-4, helium-3, and lithium-7 nuclei.

When the
temperature of the Universe was around 10 MeV (that's around 10¹¹ K), corresponding to about 10⁻² seconds, the ratio of neutrons to protons was controlled by the weak interaction.
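While the weak interaction kept the two species in equilibrium, the neutron-to-proton ratio followed a simple Boltzmann factor, n/p = exp(-Δmc²/kT), with Δmc² = 1.293 MeV the neutron-proton mass difference. The sketch below (the ~0.8 MeV freeze-out temperature is an assumed round number) shows how this leads to the famous ~25% primordial helium mass fraction:

```python
import math

# Equilibrium neutron-to-proton ratio: n/p = exp(-dm*c^2 / kT),
# where dm*c^2 = 1.293 MeV is the neutron-proton mass difference.
DELTA_M = 1.293    # MeV

def n_over_p(kT_MeV):
    return math.exp(-DELTA_M / kT_MeV)

# At weak freeze-out (~0.8 MeV, an assumed round number) the ratio is ~1/5.
ratio_freeze = n_over_p(0.8)

# By the time nucleosynthesis actually happens, neutron decay has lowered
# the ratio to roughly 1/7; nearly all those neutrons end up in helium-4,
# giving a primordial helium mass fraction Y = 2(n/p) / (1 + n/p).
np_bbn = 1.0 / 7.0
Y_helium = 2 * np_bbn / (1 + np_bbn)   # about 0.25
```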

Recombination Epoch

Between ca. xx years and ca. x years after the Big Bang
temperatures from 10 K (?? MeV) to 10 K (?? MeV)

At the end of the first three minutes, the Universe was a hot dense ionised plasma of primordial elements (nuclei of hydrogen, helium, lithium, deuterium, plus electrons), in thermal equilibrium with radiation, all at a common temperature. The high density and temperature kept all elements completely ionised. However, as the Universe continued to expand both the density and temperature dropped and the nuclei began to acquire electrons. Firstly, after about 20,000 years the helium nuclei acquired a single electron. Then after about 90,000 years the helium nuclei acquired a second electron and became neutral. Then about 400,000 years after the Big Bang the Universe was sufficiently cool and tenuous so that the remaining charged particles (protons and electrons) combined to form hydrogen atoms. The Universe, now composed of just neutral particles (hydrogen, helium and a little deuterium and lithium) and radiation, became transparent to photons. The time when the Universe transitioned from being completely ionised to being almost neutral is known as the Recombination Epoch.
The process of
electron capture by hydrogen and helium nuclei, and the trickle-down of the electrons through the quantum states, resulted in the release of photons. These photons add to the relic radiation of the Big Bang - the cosmic microwave background (CMB). Since a photon released by electron capture can very well knock the electron off another atom (i.e. ionisation), effectively nullifying the process, recombination proceeded over a long period, progressing gradually with the expansion and cooling of the Universe. Once the photons were decoupled from matter, and as the Universe became almost wholly composed of neutral atoms, the photons were able to travel freely through the Universe without interacting with matter. The CMB is not just blackbody radiation following Planck's Law; it must inevitably also contain the recombination lines emitted during cosmological recombination. As such these spectral lines are a storehouse of information about the early thermal history of the Universe, and they are often called 'probes' into the past.

So the Epoch of Recombination (EoR) started ca. 370,000 years after the Big Bang. In very simple terms, initially the Universe was practically homogeneous, but as the Universe expanded the temperature of matter and radiation dropped. When the temperature dropped to about 3,000 K charged electrons and protons (hydrogen nuclei) were able to combine (bound) to form electrically neutral hydrogen atoms.
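Since the CMB temperature simply scales as T(z) = T₀(1 + z), with T₀ = 2.726 K today, the 3,000 K figure just quoted translates directly into the redshift of recombination:

```python
# The CMB temperature scales with redshift as T(z) = T0 * (1 + z).
T0 = 2.726       # K, CMB temperature today
T_rec = 3000.0   # K, approximate temperature at recombination

# Solving for z gives the familiar "last scattering" redshift of ~1100.
z_rec = T_rec / T0 - 1.0
```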
EoR is often presented as a watershed in the history of the Universe, because up to that moment in time everything had been dominated by dark matter, with baryonic matter (e.g. protons and neutrons) playing a marginal role. The EoR marks the transition to an era in which the role of the intergalactic medium (IGM) in the formation and evolution of structures became dominant.
The IGM is a kind of rarefied plasma (mostly ionised hydrogen) with a density that can be between 5 and 200 times the average density of the Universe. The IGM is ionised because the temperature is high enough to cause bound electrons to escape from hydrogen nuclei, and it's for this reason that today it is also often called the warm-hot intergalactic medium (i.e. >100,000 K is defined as 'warm'). It is thought that up to half of all atomic matter in the Universe might exist in this warm-hot, rarefied state. So this warm-hot intergalactic medium is today home to galactic filamentary structures. The rarefied plasma that becomes part of galaxy clusters at the intersections of these filamentary structures can heat up even more (to between 10 and 100 million K), and is then known as the intracluster medium. As a superheated plasma, the intracluster medium becomes an emitter of X-ray radiation, which provides information on the temperature, density and metallicity of the plasma.

The 21-cm fluctuations are sensitive to the density, temperature, and ionised fraction of the IGM. 21-cm tomography tells us about the physics of the IGM gas and about structure formation during the EoR (Madau et al. 1997; Tozzi et al. 2000; Ciardi & Madau 2003; Furlanetto et al. 2004), and several 21-cm experiments have recently been designed and built (e.g. MWA, LOFAR, SKA).
On large scales (l < 100), the 21-cm fluctuations cross-correlate with the CMB Doppler temperature anisotropies, which are due to the motions of ionised baryons (Alvarez et al. 2006; Adshead & Furlanetto 2007).

As reionisation proceeds, the coupling of CMB photons and free electrons by Thomson scattering becomes strong again. As a result, Thomson scattering during reionisation produces secondary CMB temperature anisotropy and polarisation.
In the CMB temperature, the main generation mechanism at the EoR is the Doppler effect: first-order anisotropic fluctuations dominate on large scales (l < 1000), while second-order fluctuations (from patchy reionisation) dominate on small scales (l > 1000).

The transition of the Universe from an ionised to a neutral state during the Epoch of Recombination, at about 400,000 years cosmic time, was followed by what we call the 'Dark Ages'. We cannot hope to see the Universe during this age since there was little interaction between matter and radiation; however, matter and radiation were still in thermal equilibrium. But at about 10 million years the density became sufficiently low that the matter cooled away from the radiation, and from that time the gas is potentially observable.

Around 100 million years after the
Big Bang, the initial density perturbations in the Universe are expected to have grown sufficiently to collapse and form structures: the first stars? The first ultradwarf galaxies? These might have lit up the whole Universe - the 'Cosmic Dawn'. All this changed the temperature of matter, including hydrogen, and the populations of its electrons amongst its internal energy states, so the visibility of the gas evolved over cosmic time. The formation of the first sources of radiation also resulted in the neutral Universe being reionised, ushering in the Epoch of Reionization. By 550 million years after the Big Bang, the Universe was completely reionised, and it still remains so!

However, this rather long history is not without questions. There are plenty of unresolved issues at these epochs of the early Universe - in particular during the times when the history depends on the complex astrophysics of star and galaxy formation and of radiative escape from these first objects. We specifically focus on two of them, namely the Epochs of Recombination and Reionization. Can we actually study the thermal history of the Universe through observations? Were there any non-standard processes during the Epoch of Recombination? How were the first stars formed? How different were they from present ones? How did they heat and ionise the Universe again? We aim to get answers to these by detecting signals of cosmological origin through our dedicated experiments: APSERa and SARAS.

Hydrogen is the most common species in the
Universe, and whilst it has a very simple structure, a proton and an electron, it hides a richness of physical properties. So let's have a look at some basic physics to do with the 21-cm hydrogen line. This line arises from the 'spin-flip' hyperfine transition in the ground state of neutral hydrogen, emitted at a frequency of about 1420 MHz.

Because direct
recombination to the ground state of hydrogen is very inefficient, hydrogen atoms usually formed in excited states, followed quickly by transitions to lower energy states through the emission of photons.

Atomic Emission Hydrogen Spectrum

Above we have a simple representation of the Bohr model for the hydrogen spectral series (there are also two rarer series, called the Humphreys series and the unnamed 'further series'). The spectral lines are due to electrons making transitions between two energy levels in the atom. Spectral lines are important in astronomical spectroscopy for detecting the presence of hydrogen and for calculating redshift. The 21-cm hydrogen line falls within the microwave region and is used in radio astronomy because it can penetrate large clouds of interstellar dust that are opaque to visible light (and can also easily pass through the Earth's atmosphere and be detected). The 21-cm hydrogen line was first detected in 1951, and since then the rotation curve of our galaxy has been calculated, so that distances to anywhere in the Milky Way can now be determined. The 21-cm hydrogen line is also the only way to probe the 'Dark Age' between recombination and reionisation. Factoring in the redshift, the 21-cm line is now observed on Earth at frequencies from about 200 MHz down to about 10 MHz. This permits intensity mapping of the early large-scale structure of the Universe. It also provides a picture of how the Universe was reionised, because reionised hydrogen appears as holes in the 21-cm background.
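To make the numbers above concrete, here is a small sketch using the Rydberg formula for the hydrogen spectral series, plus the simple redshift scaling of the 21-cm line (the constants are standard textbook values, not figures taken from this text):

```python
# Rydberg formula: 1/lambda = R_H * (1/n_low^2 - 1/n_high^2)
R_H = 1.0968e7      # hydrogen Rydberg constant, m^-1

def line_wavelength_nm(n_low, n_high):
    """Wavelength (nm) of the hydrogen transition n_high -> n_low."""
    inv_lambda = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1e9 / inv_lambda

h_alpha = line_wavelength_nm(2, 3)     # Balmer-alpha, ~656 nm (~6560 Å)

# The 21-cm line is emitted at 1420.4 MHz; at redshift z it arrives at
# 1420.4/(1+z) MHz, which is why Dark Age / reionisation surveys work
# between roughly 200 MHz (z ~ 6) and 10 MHz (z ~ 140).
def observed_21cm_MHz(z):
    return 1420.4 / (1.0 + z)
```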

At the Recombination Epoch, the cosmic microwave background (CMB) filled the Universe with a red, uniformly bright glow of black-body radiation. But later the temperature dropped and the cosmic microwave background shifted to the infrared. To human eyes, the Universe would then have appeared completely dark.
The idea is that electrons combined with protons to form hydrogen atoms, resulting in a sudden drop in the free electron density. The photons we detect today as the CMB were released when the different types of particles fell out of thermal equilibrium with each other (the so-called 'decoupling').

A long period of time would have elapsed before the first stars started to emit radiation that was not part of the CMB.

After recombination and decoupling, the
Universe was transparent and had cooled enough to allow light to travel long distances, but there were no light-producing structures such as stars and galaxies. Stars and galaxies are formed when dense regions of gas form due to the action of gravity, and this takes a long time within a near-uniform density of gas and on the scale required, so it is estimated that stars did not exist for perhaps hundreds of millions of years after recombination.

This period, known as the Dark Ages, began around 370,000 years after the Big Bang. During the Dark Ages, the temperature of the Universe cooled from some 4000 K to about 60 K (3727 °C to about −213 °C), and only two sources of photons existed: the photons released during recombination/decoupling (as neutral hydrogen atoms formed), which we can still detect today as the cosmic microwave background (CMB), and photons occasionally released by neutral hydrogen atoms, known as the 21 cm spin line of neutral hydrogen. The hydrogen spin line is in the microwave range of frequencies, and within 3 million years the CMB photons had redshifted out of visible light to infrared; from that time until the first stars, there were no visible-light photons. Other than perhaps some rare statistical anomalies, the Universe was truly dark.
The first generation of stars, known as Population III stars, formed within a few hundred million years after the Big Bang.[48] These stars were the first source of visible light in the Universe after recombination. Structures may have begun to emerge from around 150 million years, and early galaxies emerged from around 380 to 700 million years. (We do not have separate observations of very early individual stars; the earliest observed stars are discovered as participants in very early galaxies.) As they emerged, the Dark Ages gradually ended. Because this process was gradual, the Dark Ages only fully ended around 1 billion years, as the Universe took on its present appearance.
There is also currently an observational effort underway to detect the faint 21 cm spin line radiation, as it is in principle an even more powerful tool than the cosmic microwave background for studying the early Universe.

Because direct recombinations to the ground state (lowest energy) of hydrogen are very inefficient, these hydrogen atoms generally form with the electrons in a high-energy state, and the electrons quickly transition to their low-energy state by emitting photons. Two main pathways exist: from the 2p state by emitting a Lyman-α photon - these photons will almost always be reabsorbed by another hydrogen atom in its ground state - or from the 2s state by emitting two photons, which is very slow.
This production of photons is known as decoupling, which leads to recombination sometimes being called photon decoupling, though recombination and photon decoupling are distinct events. Once photons decoupled from matter, they travelled freely through the Universe without interacting with matter, and they constitute what is observed today as the cosmic microwave background radiation (in that sense, the cosmic background radiation is infrared [and some red] blackbody radiation emitted when the Universe was at a temperature of some 3000 K, redshifted by a factor of 1100 from the visible spectrum to the microwave spectrum).

As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation.[14]
The colour temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to 2.7260±0.0013 K,[4] it will continue to drop as the Universe expands. The intensity of the radiation also corresponds to blackbody radiation at 2.726 K, because redshifted blackbody radiation is just like blackbody radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred,[15] at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the Universe is in the cosmic microwave background,[16] making up a fraction of roughly 6×10⁻⁵ of the total density of the Universe.[17]
Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.[7]
The energy density of the CMB is 0.25 eV/cm³[18] (4.005×10⁻¹⁴ J/m³), corresponding to some 400-500 photons/cm³.[19]
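These figures can be checked (to within rounding) from the standard blackbody formulas at T = 2.726 K: the snippet below gives roughly 0.26 eV/cm³ and about 410 photons/cm³.

```python
import math

# Blackbody checks at the CMB temperature:
#   energy density  u = a_rad * T^4
#   photon density  n = (2*zeta(3)/pi^2) * (kT / (hbar*c))^3
k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054572e-34     # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
a_rad = 7.5657e-16      # radiation constant, J m^-3 K^-4
zeta3 = 1.2020569       # Riemann zeta(3)

T = 2.726               # K, CMB temperature today

u = a_rad * T**4                                                  # ~4.2e-14 J/m^3
n_photons = (2 * zeta3 / math.pi**2) * (k_B * T / (hbar * c))**3  # photons per m^3
n_per_cm3 = n_photons * 1e-6                                      # ~410 per cm^3
```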

Cosmic 'Dark Age'

Between ca. 370,000 years and ca. 1 billion years after the Big Bang
temperatures from 10 K (?? MeV) to 10 K (?? MeV)

Interestingly the 'language' of
astronomy is starlight, but starlight has not always been a feature of the Universe. In Big Bang cosmology, there was a period when the Universe was utterly dark. The 'Dark Age' is the period between the time when the cosmic microwave background (CMB) radiation was emitted and the time when the evolution of structures in the Universe led to the gravitational collapse of objects and the formation of the first stars.

Timeline of the Universe

This is an artist's impression of the timeline of the Universe, starting about 13.7 billion years ago. We have the results from the Cosmic Background Explorer (COBE) and the initial investigation of the cosmic microwave background. Then we have the 'first light' of the Universe from the Spitzer Space Telescope. Originally this light would have been in the visible and ultraviolet spectrum, but it has since been stretched (redshifted) to lower-energy infrared wavelengths.

The above description hides an enormous difficulty in understanding how to connect the very small perturbations seen in the
temperature maps of the cosmic microwave background (CMB) with the large non-linear structures that will then collapse and cool to form halos and filaments (and continue to collapse and cool to form stars and then galaxies).

So how did a nearly featureless
Universe (represented by the CMB) evolve into the lumpy cosmic web with a non-linear distribution of matter in massive galaxy clusters, interconnected through a pronounced and intricate filigree of filamentary features?

Cosmic Web Slug Nebula

To the left we have a dark matter simulation of how matter is distributed in a cosmic web of filaments that are thought to connect galaxies together. In the zoomed-in, high-resolution part (10 million light-years across) we can see how both diffuse gas and dark matter are distributed. The idea is that galaxies are embedded in a cosmic web of matter, most of which (about 84%) is invisible dark matter. It is thought that gravity causes ordinary matter to follow the distribution of dark matter, so filaments of diffuse, ionised gas are expected to trace a pattern similar to that seen in this dark matter simulation. What we see here is that by chance a quasar is illuminating a nebula (hydrogen gas) and making it glow. The glow is the emission of ultraviolet light known as Lyman-alpha radiation. This light is then 'stretched' from an invisible ultraviolet wavelength to a visible shade of violet by the time it reaches Earth.
To the right we have the so-called Slug Nebula which is a 2 million
light-year-long thread of diffuse hydrogen stretching between galaxies, and again it is the Lyman-alpha radiation that highlights the nebula's asymmetric, filamentary structure.

It is possible to hide the detail in
a quick summary. About 370,000 years after the Big Bang the Universe's density had decreased enough that the temperature fell below 3000 K, allowing ions and electrons to (re)combine into neutral hydrogen and helium. Immediately afterwards, photons decoupled from baryons (e.g. protons and neutrons) and the Universe became transparent, leaving a relic signature known as the cosmic microwave background (CMB). This event ushered the Universe into a period called the Dark Age. The Dark Age ended about 400 million years later, when the first galaxies formed and started emitting ionising radiation. Initially, during the 'Epoch of Reionization', the intergalactic medium was neutral except in regions surrounding the first objects. However, as reionisation progressed, an evolving patchwork of neutral and ionised hydrogen regions unfolded. After a sufficient number of UV-emitting objects had formed, the temperature and the ionised fraction of the gas in the Universe increased rapidly, until eventually the ionised regions permeated the whole Universe.

I'm going to have to break for a moment to introduce some special terminology used by astronomers. Hydrogen (H) is the lightest chemical element and is the most abundant chemical substance in the Universe. In a simplified periodic table chemical elements are presented as seen below.

Atomic and Mass Numbers

Hydrogen has three main isotopes. The first is a stable isotope called the hydrogen atom (1H), or atomic hydrogen, which contains one proton and one orbital electron, but no neutrons (so it's sometimes called protium). It is this atomic hydrogen that makes up 75% of the baryonic mass of the Universe, but on Earth isolated hydrogen atoms are extremely rare. On Earth hydrogen atoms tend to combine with other atoms in chemical compounds (e.g. in water, with a molecular formula of H2O), or with a second hydrogen atom to form ordinary (diatomic) hydrogen gas H2. Atomic hydrogen and the hydrogen atom are often confused. Hydrogen atoms combine with other atoms, whereas atomic hydrogen should be used to describe isolated individual atoms of hydrogen in a free state; and people often speak of hydrogen when they mean hydrogen gas in the Earth's atmosphere. Then there is a stable isotope called deuterium (D is the valid chemical symbol, but you also see references to 2H, heavy hydrogen, or hydrogen-2), which contains one proton, one neutron, and one orbital electron. Finally there is the rare and radioactive (so unstable) isotope called tritium (T, or 3H, or hydrogen-3), which contains one proton, two neutrons, and one orbital electron. Tritium decays into helium-3 by beta decay, releasing 18.6 keV in the process (the half-life is about 12.3 years).
The reason we have made this little detour here is that
astronomers talk of H I regions and H II regions (i.e. presented with the Roman numerals I and II). An H I region is an interstellar cloud in the interstellar medium, and consists of neutral atomic hydrogen (which they indicate as H I or just HI), along with some other elements such as helium. Astronomers also talk of molecular clouds, which contain molecular hydrogen, i.e. hydrogen gas H2. Where it starts to get really confusing is that astronomers also talk of the H II region, which is a region of interstellar atomic hydrogen that is ionised. The idea is that it defines a cloud of partially ionised gas in which star formation has recently taken place. So for astronomers H I is neutral atomic hydrogen, and H II is singly-ionised hydrogen, which would often be presented as H+ in other branches of the sciences (and as a free proton in physics). Naturally a doubly-ionised atom such as O++ would be presented as O III (or just OIII). There can also be some confusion when people speak of H2 or H II.

Reionization in the Cosmic 'Dark Age'

Between ca. Yyy years and ca. Yyy billion years after the Big Bang
temperatures from 10 K (?? MeV) to 10 K (?? MeV)

After recombination and decoupling came the Dark Age, which ended with the 'Epoch of Reionization' - simply the ionisation of the intergalactic medium by the first stars. The epoch ended when all the atoms in the intergalactic medium had been reionised, and it represented the last major transition of hydrogen in the history of the Universe. So we can see that understanding the 'Epoch of Reionization' is vitally important, because it is tightly related to the creation of those first stars and the later evolution of all cosmological structures.

As an outsider (me) reading through the different 'accessible' publications, etc. it's very difficult to determine what the latest 'guess' is on how the first stars and galaxies were produced, and which measurement campaigns have been completed and which are still to be launched. The description below is my best effort in extracting that latest 'guess', and I have unashamedly copied large chunks from a 2018 review article by Pratika Dayal and Andrea Ferrara.

The modern history of galaxy formation theory began immediately after WW II. In fact, the term 'galaxy' came into play only after the pivotal discovery, by Edwin Hubble in 1925, that the objects until then known as 'nebulae' - since their original discovery by Charles Messier (French, 1730-1817) towards the end of the 18th century - were indeed of extra-galactic origin. It was then quickly realised that these systems were germane to understanding our own galaxy, the Milky Way. The years immediately after blossomed with key cosmological discoveries. For example, in 1929 Hubble completed his study to prove cosmic expansion and, soon after, in 1933, Fritz Zwicky (Swiss, 1898-1974) suggested that the dominant fraction of mass constituting galaxy clusters was unseen, laying the ground for the concept of 'dark matter'. As early as 1934, one of the first modelling attempts proposed that galaxies had formed out of primordial gas whose condensation process could be followed by applying viscous hydrodynamic equations. Similar pioneering attempts were made by Carl Friedrich von Weizsäcker (German, 1912-2007), who proposed that galaxies fragmented out of turbulent, expanding primordial gas.
In addition,
Fred Hoyle (English, 1915-2001) had already pointed out that the spin of a proto-galaxy (a primeval galaxy) could arise from the tidal field of its neighbouring structures. A few years later, Hoyle produced the first results emphasising the role of gas radiative cooling, fragmentation (a parent cloud breaking apart as it collapses) and the Jeans length (the critical radius where thermal energy is just stable against gravitational collapse) during the collapse of a primordial cloud. These were used to explain the observed masses of galaxies and the presence of galaxy clusters. The limitation of these early models was that they attempted to discuss the formation of galaxies independently of a cosmological framework, and included a number of ad-hoc assumptions.

Tides on Earth are caused by the combined effects of the gravitational forces exerted by the Moon and Sun, so it's not surprising that tides (or tidal fields) can also affect the cosmic web. Tides can affect the collapsing of dark matter halos and the resulting galaxy formation inside them. Whilst the Universe may be homogeneous and isotropic on a large scale, gravitational tidal forces play a significant role in the stretching and compressing of matter. And when this deformation of matter picks out a preferred direction, it introduces anisotropy (think of the shape of a rugby ball as compared to the shape of a football, which would result from an isotropic tidal field). So the idea is that galaxies interact (collide) when their gravitational fields produce disturbances in one another. Because of the extremely tenuous distribution of matter in galaxies, a collision is a gravitational interaction and not a 'car accident'. These interactions can be quite diverse, ranging from a spiral arm being distorted through to the merging of two galaxies. Einstein's theory of general relativity suggested that ripples in spacetime were possible. In 2016 gravitational waves (ripples) were observed from a collision of two black holes. In 2019 it was further suggested that tidal forces are a hidden form of low-frequency gravitational waves. For example, the Moon's gravitational influence cannot exceed the speed of light in reaching Earth (it takes about 1.3 seconds), so those gravitational forces are a form of gravitational radiation that has the same characteristics as that detected from the collision of the two black holes, albeit on a much smaller scale.

Radiative cooling is a generic expression covering all situations where a body loses heat by thermal radiation, and as such covers such mundane topics as domestic passive cooling systems as well as the more exotic cooling processes in primordial plasma. In this second example the idea is that ions, atoms, and molecules can be excited and then decay radiatively, and that the radiative losses result in the hot gas cooling down. In addition, the observation of emission or absorption lines can act as a diagnostic tool and provide information of the density, temperature and radiation field of the gas. The primary cooling mechanisms for a gas of >10,000 K (so a plasma) include:-
Collisional excitation - where a free-electron impact raises a bound electron to an excited state, which emits a photon when it decays.
Collisional ionisation - electrons can excite bound electrons within atoms and ions through collisions, but they may not have sufficient energy to ionise most species or to populate significantly the first excited state. Energy is taken from the free electron, and the excited electrons in the atom can be collisionally de-excited, or they may decay radiatively back to the lower energy level.
Recombination radiation - where a free electron recombines with an ion, and the free electron's kinetic energy is radiated away.
Free-free emission - where a free electron is accelerated by an ion and emits a photon (i.e. bremsstrahlung).
Each of these processes is, to one degree or another, proportional to the temperature, times the electron number density, times the number density of the ionic species. At higher densities, the formation of H2 molecules is an additional source of cooling.
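As a toy illustration of that scaling (the power-law Λ(T) below is purely illustrative, not a real fitted cooling function):

```python
# Toy sketch: each cooling channel's volumetric rate goes as
# (electron density) x (ion density) x a temperature-dependent
# efficiency Lambda(T). The Lambda used here is illustrative only.

def cooling_rate(n_e, n_ion, T, Lambda0=1e-24, T_ref=1e5, slope=0.5):
    """Toy volumetric cooling rate (illustrative units)."""
    return n_e * n_ion * Lambda0 * (T / T_ref) ** slope

# Doubling the density of BOTH species quadruples the cooling rate,
# which is one reason dense gas clumps cool (and collapse) first.
r1 = cooling_rate(n_e=1.0, n_ion=1.0, T=1e5)
r2 = cooling_rate(n_e=2.0, n_ion=2.0, T=1e5)
```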
There are many other radiative heating and cooling processes that are important in astrophysics, for example:-
Metal-line cooling
Compton cooling
Synchrotron cooling
Molecular-line cooling
Molecular formation cooling
Compton heating by X-rays or gamma-rays
Cosmic ray heating

Soon after,
Dennis Sciama (English, 1926-1999) reiterated the point already made by George Gamow - while working on the idea of an initially hot and dense phase of the Universe (the hot Big Bang) that would lead to primordial nucleosynthesis - that the galaxy formation problem must be tied to the cosmological one. As a result of the initially hot state of the Universe, Gamow also predicted the existence of a background of thermal radiation (now known as the cosmic microwave background) with a temperature of a few Kelvin. At that time a steady-state model was under serious consideration. This proposed that a continual creation of matter enabled the large-scale structure of the Universe to be independent of time despite its expansion. Sciama applied the previous ideas of galaxy formation in that context. His idea was that the gravitational fields of pre-existing galaxies could produce density concentrations in the intergalactic gas, resulting in the formation of new galaxies through gravitational instability.

Today gravitational instability is seen as one of the key ideas in explaining how structures evolve in the Universe. The basic idea is that small irregularities in the early distribution of matter exerted stronger or weaker gravitational forces making some areas gradually more or less dense. Those denser areas would increase their gravitational attraction and pull in more matter so the small irregularities would increase with time.

The situation changed dramatically with the experimental discovery of the cosmic microwave background in 1965, which inextricably tied cosmic evolution to galaxy formation. The stage was set for the two scenarios of galaxy formation that survived until the end of the last century. These were the “top-down” scenario, in which small galaxies are formed by fragmentation of larger units, and the “bottom-up” scenario, where large galaxies are hierarchically assembled from smaller systems. Enormous theoretical efforts were undertaken in the 1970’s to discriminate between these two contrasting paradigms of galaxy formation.

The idea that
galaxies emerged out of fluctuations of the primordial density field was gradually becoming more accepted, notwithstanding the unknown origin and amplitude of the perturbations. The next step was then to clarify the details of the non-linear stages of the growth of these fluctuations. The starting point was the idea that matter would collapse into a disk-like structure (popularly known as a “Zeldovich pancake”) before developing a complex shock structure and eventually fragmenting into lower-mass structures. The next step was the idea that after an initial expansion phase, the perturbation would “turn around”, and start to contract. Mass accretion onto the initial seed perturbations was found to be of key importance in forming fully-fledged galaxies. The overall result was a strong support for the bottom-up scenario where larger mass objects form from the nonlinear interaction of smaller masses. This “Press-Schechter formalism” has become the underlying paradigm of galaxy formation theory. The next step was to integrate a description of baryonic physics, which proved to be a key ingredient in predicting dissipative processes, star formation and ultimately the actual appearance of observed galaxies.

In 1978 the
Press-Schechter theory of gravitational clustering was combined with both gas dynamics and dark matter. In practice, the new model consisted of two stages. Initially dark matter condensed into halos via pure gravitational collapse, after which baryons collapsed into the pre-existing potential wells, dissipating their energy via gas-dynamical processes. As a result, the final galaxy properties were partly determined by the assembly of the parent dark matter halo. The model provided a first prediction of the luminosity function (the number of galaxies in a given luminosity interval) that was roughly consistent with the scarce data available at that time. This class of two-stage galaxy formation models has become known as “semi-analytical models”, and such models are now standard tools in the galaxy formation field.

At approximately the same time, measurements of the rotation curves of 10 spiral galaxies, using the Balmer-alpha (Hα) line at 6562 Å, clearly indicated the need for a dark matter halo to explain the observed rotation curves.

However, one crucial element, necessary for completing a coherent framework for galaxy formation, was still missing. This was a precise knowledge of the primordial density fluctuation power spectrum.

Indeed, the powerful two-stage approach by White and Rees [20] was still plagued by this limitation. In their study, these authors assumed a simple power-law dependence of the root mean square (r.m.s.) amplitude of the perturbation on
mass, σ ∝ M^(−α), with α essentially being an unknown parameter. Fortunately, soon after, Guth [26] and Sato [27] independently proposed the basic ideas of inflation [for a review see, e.g., 28]. They noted that, despite the (assumed) highly homogeneous state of the Universe immediately after the Big Bang, regions separated by more than 1.8 degrees on the sky today should never have been causally connected (the horizon problem). In addition, the initial value of the Hubble constant must be fine-tuned to an extraordinary accuracy to produce a Universe as flat as the one we see today (the flatness problem). According to the original proposal (also known as “old inflation”), de Sitter inflation occurred as a first-order transition to the true vacuum. However, a key flaw in this model was that the Universe would become inhomogeneous through bubble collisions soon after the end of inflation. To overcome the problem, Linde [29] and Albrecht and Steinhardt [30] instead proposed the concept of slow-roll inflation with a second-order transition to the true vacuum. Unfortunately, this scenario also suffers from a fine-tuning problem: the time spent in the false vacuum must be long enough to produce a sufficient amount of inflation. Linde [31] later considered a variant of slow-roll inflation called chaotic inflation, in which the initial conditions of the scalar fields are chaotic. Chaotic inflation has the advantage of not requiring an initial thermal equilibrium state. Rather, it can start out close to the Planck density, thereby solving the problem of the initial conditions faced by other inflationary models.

Most importantly, in the context of galaxy formation, inflation offered precise predictions for the origin of the primordial fluctuations and their power spectrum. Bardeen et al. [32] showed that fluctuations in the scalar field driving inflation (the “inflaton”) are created on quantum scales and expanded to large scales, eventually giving rise to an almost scale-free spectrum of Gaussian adiabatic density perturbations. This is the Harrison-Zeldovich spectrum, where the initial density perturbations can be expressed as P(k) ∝ k^(ns) with the spectral index ns ≈ 1. Quantum fluctuations are typically frozen by the accelerating expansion when the scales of fluctuations leave the Hubble radius. Long after inflation ends, these scales re-enter the Hubble radius. Indeed, the perturbations imprinted on the Hubble patch during inflation are thought to be the origin of the large-scale structure in the Universe. In fact, temperature anisotropies, first observed by the Cosmic Background Explorer (COBE) satellite in 1992, exhibit a nearly scale-invariant spectrum as predicted by the inflationary paradigm - these have been confirmed to a high degree of precision by the subsequent Wilkinson Microwave Anisotropy Probe (WMAP) and Planck experiments. As σ ∝ M^(−(3+ns)/6) for the inflationary spectrum, the value of α in the White and Rees [20] theory could now be uniquely determined.
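The link between the spectral index ns and the White & Rees exponent α is simple arithmetic; a minimal check in Python (the function name is mine, the relation is the one quoted above):

```python
def alpha_from_ns(ns):
    # sigma(M) ∝ M^(-(3+ns)/6) for an inflationary spectrum P(k) ∝ k^ns,
    # so the exponent alpha in sigma ∝ M^(-alpha) is (3 + ns) / 6
    return (3.0 + ns) / 6.0

alpha = alpha_from_ns(1.0)   # Harrison-Zeldovich case, ns = 1, gives alpha = 2/3
```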
The final step was to compute the linear evolution of the primordial density field up to the recombination epoch. The function describing the modification of the initial spectrum, due to the differential growth of perturbations on different scales and at different epochs, is called the “transfer function”. It was computed for dark matter models based on massive (GeV-TeV) particles, collectively known as Cold Dark Matter (CDM) models, by Peebles [33] and Bardeen et al. [34], and later improved by the addition of baryons by Sugiyama [35].

Based on the above theoretical advances and exploiting the early availability of computers, it was possible to envision, for the first time, the possibility of simulating, ab initio, the process of structure formation and evolution through cosmic time. This second class of galaxy formation models is referred to as “cosmological simulations”, which is another major avenue of research in the field today. More technical details will be given in a following dedicated section (Sec. 4.2). Here, it suffices to stress how these theoretical and technological advances have revolutionised our view of how galaxies form. In the first attempt, Navarro and Benz [36] simulated the dynamical evolution of both collisionless dark matter particles and a dissipative baryonic component in a flat Universe in order to investigate the formation process of the luminous components of galaxies at the centre of galactic dark halos. They assumed Gaussian initial density fluctuations with a power spectrum of the form P(k) ∝ k^(−1), meant to reproduce the slope of the power spectrum in the range of scales relevant for galaxies. Although the available computing power for their smoothed-particle hydrodynamics (SPH) scheme (for details see Sec. 4.2) allowed only the outrageously small number of about 7000 particles (compared to the tens of billions used in modern computations), the emergence of the “cosmic web”, made up of filaments, knots and voids, was evident.

Although the general scenario has now been established, many fundamental problems remain open, in particular those concerning the formation of the first
galaxies. This area represents the current frontier of our knowledge. As such it will be central for the next decade with the advent of many observational facilities that will allow us to peer into these most remote epochs of the Universe.

The Epoch of Reionization (EoR) represents the last major phase transition of hydrogen in the history of the Universe. Its beginning is marked by the appearance of the first stars and galaxies, whose Lyman continuum photons (with energy E > 13.6 eV) gradually ionize the neutral hydrogen (H I) in the intergalactic medium (IGM). The growing ionized bubbles around galaxies merge and expand until the IGM is completely ionized by z ≈ 6. A rising number of high-redshift galaxy observations are providing us with increasing hints about the properties and numbers of star-formation-driven ionizing sources. These galaxy data-sets are complemented by (upper limits on) the 21cm emission from H I in the IGM during reionization obtained by dedicated radio experiments.
Despite this progress, the reionization topology, the properties of the ionizing sources and the impact of reionization on the evolution of galaxy properties through radiative feedback effects remain key outstanding questions in the field of physical cosmology (for a review see e.g. Dayal & Ferrara 2018).
As the IGM becomes ionized, the associated ultraviolet background (UVB) photo-heats the gas in halos and the IGM to about
10^4 K. The higher temperature and rising pressure of the gas in a halo cause a fraction of the gas to photo-evaporate into the IGM (Barkana & Loeb 1999; Shapiro et al. 2004) and raise the Jeans mass for galaxy formation (reducing the amount of gas being accreted; Couchman & Rees 1986; Efstathiou 1992; Hoeft et al. 2006). Both mechanisms lead to a reduction of the gas mass and the associated star formation rate, particularly in low-mass halos. However, modelling the impact of reionization feedback on galaxy formation remains challenging due to its complex dependence on halo mass and redshift, the patchiness and strength of the UVB, and the redshift at which an assembling halo is first irradiated by the UVB (e.g. Gnedin 2000; Sobacchi & Mesinger 2013a).
Early works studied the effects of radiative (photoheating) feedback on galaxies in cosmological hydrodynamical simulations by quantifying the loss of baryons in low-mass halos in the presence of a homogeneous UVB (e.g. Hoeft et al. 2006; Okamoto et al. 2008; Naoz et al. 2013). However, since reionization is a spatially inhomogeneous and temporally extended process, an increasing number of radiation-hydrodynamical simulations have studied the impact of an inhomogeneous and evolving UVB on the galaxy population and found a reduction in the star formation rates of low-mass galaxies with halo mass Mh ≲ 10^9 M⊙ (Gnedin 2000; Hasegawa & Semelin 2013; Gnedin & Kaurov 2014; Pawlik et al. 2015; Ocvirk et al. 2016, 2018; Katz et al. 2019; Wu et al. 2019). Most importantly, a number of such radiation-hydrodynamical simulations show that the star-formation-suppressing effect of radiative feedback increases with time, even after the Universe has been mostly ionized (Gnedin & Kaurov 2014; Ocvirk et al. 2016, 2018; Wu et al. 2019), which could be attributed to a decrease in self-shielding and a slower heating of the gas (Wu et al. 2019). The suppression of star formation is also found to be dependent on the environment, i.e. galaxies in over-dense regions that ionize earlier feature higher star formation rates, which decline sharply after local reionization for low-mass halos with Mh < 10^9 M⊙ (Dawoodbhoy et al. 2018). Highlighting the interplay between galaxy formation and reionization, Wu et al. (2019) have shown that stronger stellar feedback reduces the star formation within the galaxy and hence the UVB, weakening the strength of radiative feedback. In order to investigate the signatures of radiative feedback on the ionization topology, a number of works have combined N-body simulations with radiative transfer and used different suppression models for the ionizing emissivities of low-mass halos (e.g. Iliev et al. 2007, 2012; Dixon et al. 2016). However, since these simulations do not contain a galaxy evolution model, the gas mass in halos below the local Jeans mass of the photo-heated IGM is instantaneously suppressed in ionized regions.

As an outsider (me) reading through the various 'accessible' publications, it is very difficult to determine what the latest 'guess' is on how the first stars and galaxies were produced, and which measurement campaigns have been completed and which are still to be launched. The description below is my best effort at extracting that latest 'guess'.

At very high z (redshift), the Universe was practically homogeneous, and the temperature of matter and radiation dropped as the Universe expanded. Atoms formed at z ≈ 1100, when the temperature was about 3000 K, a value low enough for the plasma to recombine. In this Recombination Epoch, the CMB filled the Universe with a red, uniformly bright glow of black-body radiation, but later the temperature dropped and the CMB shifted to the infrared. To human eyes, the Universe would then have appeared as a completely dark place. A long period of time had to pass until the first objects collapsed, forming the first stars that shone in the Universe with the first light ever emitted that was not part of the CMB (Fig. 1). The period of time between the last scattering of the CMB radiation by the homogeneous plasma and the formation of the first star has come to be known as the Dark Age of the Universe.
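The ≈3000 K figure follows directly from the scaling of the CMB temperature with redshift, T(z) = T0(1 + z); a quick check in Python, taking the measured present-day value T0 = 2.725 K:

```python
T0 = 2.725               # present-day CMB temperature in kelvin

def T_cmb(z):
    # The CMB temperature scales as T(z) = T0 * (1 + z)
    return T0 * (1.0 + z)

T_rec = T_cmb(1100)      # temperature at recombination, roughly 3000 K
```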

Observations provide detailed information on the state of the
Universe when the CMB radiation was last scattered at z ≈ 1100, and galaxies and quasars have been observed up to z ≈ 6.5. Theory suggests that the first stars and galaxies should have formed substantially earlier, so experts expect to discover galaxies at progressively higher z as technology advances and fainter objects are detected. However, beyond a z of 10 to 20, the Cold Dark Matter (CDM) theory with Gaussian fluctuations predicts that the dark matter halos that can host luminous objects become extremely rare, even for low-mass halos. So discovering any objects at z ≈ 20 should become exceedingly difficult as we reach back into the Dark Age. During the Dark Age, before the collapse of any objects, not much was happening at all. The atomic gas was still close to homogeneous, and only a tiny fraction of it formed the first molecules of hydrogen (H2), hydrogen deuteride (HD), and lithium hydride (LiH) as the temperature cooled down. One of the few suggested ideas for an observational probe of the Dark Age is to detect secondary anisotropies in the CMB that were imprinted by Li atoms as they recombined at z ≈ 400 through the resonance line at 670.8 nm, which would be redshifted to the far-infrared today, making it difficult to observe because of the foreground emission by dust.
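The redshifting of the 670.8 nm Li line into the far-infrared is a one-line calculation, λ_obs = λ_rest (1 + z):

```python
lam_rest_nm = 670.8                    # Li resonance line, rest wavelength in nm
z = 400                                # approximate redshift of Li recombination

lam_obs_nm = lam_rest_nm * (1 + z)     # observed wavelength today
lam_obs_um = lam_obs_nm / 1000.0       # in micrometres: ~270 um, i.e. far-infrared
```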

The Dark Age ended when the first stars were formed. In order to form stars, the atomic gas must be able to follow the collapse of dark matter halos. This happens when the halo mass is above the Jeans mass of the gas at the virialized temperature and density of the intergalactic medium, a condition that is fulfilled when the virialized temperature is above about 100 K. In halos with lower temperatures, the gas pressure is sufficient to prevent the gas from collapsing. In addition, there must be a radiative cooling mechanism for the gas to lose its energy and concentrate to ever-higher densities in the halo centres until stellar densities are reached. Without radiative cooling, the gas reaches hydrostatic equilibrium in the halo after the gravitational collapse and stays at a fixed density without forming stars. The ability of the gas to cool depends on the virialized temperature and the chemical composition of the gas. The virialized temperature would have been low for the first objects formed, but would then have increased rapidly with time.

You may have noticed the expression "virialized temperature", but what does that mean? It refers to the Virial Theorem, which was suggested in 1870 by Rudolf Clausius (German, 1822-1888). He had already stated the basic ideas of the second law of thermodynamics and introduced the concept of entropy, and in the Virial Theorem he noted that for a steady-state or quasi-steady-state system bound by gravity the time-averaged kinetic energy is equal to minus ½ the time-averaged potential energy (equivalently, the potential energy is twice the total energy, and the kinetic energy is the negative of the total energy). We need to remember that at that time gases were not thought of in terms of their internal and kinetic energy (statistical mechanics was yet to be developed), and therefore heat and classical mechanics were regarded as two distinct disciplines. It was in this context that Clausius introduced the new term 'virial' (from the Latin vis, plural vires, meaning force), a scalar built from ½ of all the forces acting on a system. Others would go on to generalise the idea, and Fritz Zwicky (Swiss, 1898-1974) would be the first to use the theorem to deduce the existence of dark matter.
I'm not sure, but I think the usefulness of the
Virial Theorem is that the time average of properties such as kinetic energy and potential energy is equal to the average over the entire system (i.e. space and time averages are equal), and that dynamic systems that have evolved for some time 'forget' their initial states. Virialized systems do not gain or lose mass, most of the system's energy is not being converted into heat, and the systems are stable over long periods of time. So our solar system and our Milky Way are both virialized, but our observable Universe is not, because it keeps gaining mass and energy as our cosmic horizon expands. In very (very) simple terms, when the theorem was applied to galactic clusters, the gravitational mass computed was vastly greater than the measured mass derived from the amount of light generated by the stars in the galaxies. The difference is ascribed to dark matter.
So what does a
"virialized temperature" actually mean? In many situations it is desirable to translate a predicted mass of a star cluster or halo into a temperature which can be directly observed. One defines a "virial radius", then a "virial mass" within that radius, and finally a virial temperature proportional to that mass at the same radius. I've read that this applies to high-mass clusters, but that a steeper dependence on mass is more appropriate for low-mass clusters.

You may also have noted a reference to the Jeans mass. It was James Jeans (English, 1877-1946) who showed that gravitational collapse of a molecular cloud (a type of interstellar cloud) would occur if the gas pressure was too weak to balance the force of gravity. So the Jeans mass is that critical mass as a function of density and temperature. The idea is that the gas pressure, caused by the thermal motion of the atoms or molecules comprising the cloud, tries to make the cloud expand, whereas gravity tries to make the cloud collapse. The Jeans mass is the critical mass at which both forces are in equilibrium.
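One common textbook form of the Jeans mass can be evaluated directly. The temperature and density below are my own illustrative values for cool, moderately dense primordial gas, not numbers from this text:

```python
from math import pi, sqrt

G     = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
k_B   = 1.381e-16     # Boltzmann constant, erg/K
m_H   = 1.673e-24     # hydrogen mass, g
M_sun = 1.989e33      # solar mass, g

def jeans_mass_g(T, n, mu=1.22):
    """One common form of the Jeans mass (in grams) for gas at
    temperature T (K) and number density n (cm^-3):
    M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    rho = mu * m_H * n
    return (5.0 * k_B * T / (G * mu * m_H)) ** 1.5 * sqrt(3.0 / (4.0 * pi * rho))

# Cool primordial gas: T = 200 K, n = 100 cm^-3 (illustrative values)
M_J = jeans_mass_g(200.0, 100.0) / M_sun    # of order 10^4 solar masses
```

Hotter or more diffuse gas raises M_J, which is exactly why photoheating during reionization (discussed above) suppresses star formation in low-mass halos.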

The primordial
gas in the first halos was mainly composed of atomic hydrogen and helium. Atomic hydrogen provides radiative cooling only when the virialized temperature is above ≈10,000 K, when collisions can excite and ionise atomic hydrogen and the gas can then readily contract to form galaxies. In the intermediate virialized temperature range from ≈100 K to ≈10,000 K, the gas settles into halos but atomic cooling is not available and, in the absence of the heavy elements that were formed only after massive stars ejected their synthesised nuclei into space, the only available coolant is dihydrogen (H2). Because two hydrogen atoms cannot form a molecule simply by colliding and emitting a photon, only a small fraction of the gas in these first objects could become dihydrogen, via reactions involving the species H− and H2+ formed from the residual free electrons and protons left over from the early Universe. This limited the rate at which the gas could cool. Simulations have shown that the first stars form in halos with virialized temperatures of ≈2,000 K and masses of ≈10^6 solar masses (M⊙). At lower temperatures, the rotational transitions of dihydrogen do not provide sufficient cooling for the gas to dissipate its energy. The slow cooling in these first objects leads to the formation of a central core of 100 to 1,000 solar masses of gas cooled to ≈200 K, and this core may form a massive star.
As soon as the first
stars appeared, they changed the environment in which they were formed, affecting the formation of subsequent stars. Massive stars emit a large fraction of their light as photons that can ionise atomic hydrogen (with energies greater than 13.6 eV), creating H II regions (H+, or free protons) and heating the gas to ≈10,000 K. While these ionising photons are all absorbed at the H II region boundaries, in the vicinity of the stars that emit them, photons with lower energies can travel greater distances through the atomic medium and reach other halos. Ultraviolet photons with energies above 11 eV can photodissociate dihydrogen, and this can suppress the cooling rate and the ability to form stars in low-mass halos that are cooling via dihydrogen when they are illuminated by the first stars. Such effects might imply that the first massive stars formed through the radiative cooling of dihydrogen were a short-lived and self-destructive generation, because their own light might destroy the molecules that made their formation possible.
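The energy thresholds quoted above translate into wavelengths via λ = hc/E; here I take ≈11.2 eV as the threshold of the dihydrogen-dissociating band (an assumption for illustration, since the text only says "above 11 eV"):

```python
HC_EV_NM = 1239.84    # h*c expressed in eV*nm

def wavelength_nm(E_eV):
    # lambda = h c / E, for a photon of energy E in eV
    return HC_EV_NM / E_eV

lam_ionize = wavelength_nm(13.6)   # ~91 nm: the Lyman limit, ionises atomic H
lam_h2     = wavelength_nm(11.2)   # ~111 nm: band that photodissociates H2 (assumed value)
```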
When some of these massive
stars end their lives in supernovae, they eject heavy elements that pollute the Universe with the ingredients necessary to form dust and planets. In a halo containing 10^6 solar masses (M⊙) of gas, the photoionisation and supernova explosions from only a few massive stars can expel all the gas from the potential well of the halo. For example, the energy of 10 supernovae (about 10^52 ergs) is enough to accelerate one million solar masses of gas to a speed of 30 km/s, which will push the gas out of any halo with a much lower velocity dispersion. The expelled gas can later fall back as a more massive object is formed by mergers of pre-existing dark matter halos. The next generation of stars can then form by cooling provided by heavy elements, or by neutral atomic hydrogen when the virialized temperature is above ≈10,000 K. Abundances of heavy elements as low as 1,000 times smaller than that of the Sun can increase the cooling rate over that provided by dihydrogen, and can also cool the gas to much lower temperatures than possible with dihydrogen alone, reducing the Jeans mass and allowing for the formation of low-mass stars.
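The supernova energetics are easy to verify: the kinetic energy needed to accelerate 10^6 solar masses of gas to 30 km/s is ½mv², and dividing by the canonical ≈10^51 erg of kinetic energy per supernova (a standard assumed value, not from the text) recovers the quoted "about 10 supernovae":

```python
M_sun = 1.989e33      # solar mass, g
E_SN  = 1e51          # erg per supernova: canonical assumed value

m = 1e6 * M_sun       # 10^6 solar masses of halo gas
v = 30.0e5            # 30 km/s expressed in cm/s

E_kin = 0.5 * m * v**2     # kinetic energy to accelerate the gas to v: ~10^52 erg
n_sn  = E_kin / E_SN       # number of supernovae required: ~9
```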

The most important effect that the formation of stars had on their environment is the reionization of the gas in the Universe. Even though the
baryonic matter combined into atoms at z ≈ 1100, the intergalactic matter must have been reionised before the present. The evidence comes from observations of the spectra of quasars. Quasars are extremely luminous objects found in the nuclei of galaxies, powered by the accretion of matter onto massive black holes. Because of their high luminosity, they are used by cosmologists as lamp posts: accurate spectra can be obtained, in which the analysis of absorption lines provides information on the state of the intervening intergalactic matter. The spectra of quasars show the presence of light at wavelengths shorter than the Lyman-alpha (Lyα) emission line of H. If the intergalactic medium were atomic, then any photons emitted at wavelengths shorter than Lyα (121.6 nm) would be scattered by H at some point on their journey to us, when their wavelength is redshifted to the Lyα line. The mean density of H in the Universe, when it is all in atomic form, is enough to produce a scattering optical depth as large as ≈10^5. The suppression of the flux at wavelengths shorter than the Lyα emission line is called the Gunn-Peterson trough.
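The sensitivity of the Gunn-Peterson test follows from that huge optical depth: the transmitted flux fraction is e^(−τ), and τ scales linearly with the neutral hydrogen fraction. The neutral fractions below are illustrative choices:

```python
from math import exp

TAU_NEUTRAL = 1e5    # Gunn-Peterson optical depth of a fully atomic IGM (quoted above)

def transmitted_fraction(x_HI):
    # The optical depth scales linearly with the neutral fraction x_HI,
    # and the transmitted flux is exp(-tau)
    return exp(-TAU_NEUTRAL * x_HI)

T_forest = transmitted_fraction(1e-5)   # highly ionised IGM: tau ~ 1, partial absorption
T_trough = transmitted_fraction(1e-3)   # tau ~ 100: completely undetectable flux
```

This is why even a neutral fraction of only 10^−3 suffices to produce a complete trough, a point the quasar discussion below relies on.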

In quasars at z < 6, the Gunn-Peterson trough is not observed. Instead, one sees the flux partially absorbed by what is known as the Lyα forest: a large number of absorption lines of different strengths along the spectrum (Fig. 4). The H atoms in the intergalactic medium producing this absorption are a small fraction of all of the H, which is in photoionization equilibrium with a cosmic ionizing background produced by galaxies and quasars (58). The absorption lines correspond to variations in the density of the intergalactic matter. The observation that a measurable fraction of Lyα flux is transmitted through the universe implies that, below z ≈ 6, the entire universe had been reionized.
However, recently discovered quasars (19, 59, 60) show a complete Gunn-Peterson trough starting at z ≈ 6 (Fig. 4). Although the lack of transmission does not automatically imply that the intervening medium is atomic (because the optical depth of the atomic medium at mean density is ≈10^5, so even an atomic fraction as low as 10^−3 produces an optical depth of ≈100, which implies an undetectable transmission fraction), analysis of the Lyα spectra in quasars at z < 6 (61, 62) indicates that the intensity of the cosmic ionizing background increased abruptly at z ≈ 6. The reason for the increase has to do with the way in which reionization occurred. Ionizing photons in the far-ultraviolet have a short mean free path through atomic gas in the universe, so they are generally absorbed as soon as they reach any region in which the gas is mostly atomic. Initially, when the first stars and quasars were formed, the ionizing photons they emitted were absorbed in the high-density gas of the halos hosting the sources. The intergalactic medium started to be reionized when sufficiently powerful sources could ionize all

the gas in their own halos, allowing ionizing photons to escape. The reionization then proceeded by the expansion of ionization fronts around the sources (Fig. 5), separating the universe into ionized bubbles and an atomic medium between the bubbles. The ionized bubbles grew and overlapped, until every low-density region of the universe was reionized; this moment defines the end of the reionization period. High-density regions that do not contain a luminous internal source can remain atomic because the gas in them recombines sufficiently fast, and they can self-shield against the external radiation. When the ionized bubbles overlap, photons are free to travel for distances much larger than the size of a bubble before being absorbed, and the increase in the mean free path implies a similar increase in the background intensity. The exact way in which the background intensity should increase at the end of reionization, depending on the luminosity function and spatial distribution of the sources, has not yet been predicted by theoretical models of reionization [e.g., (64)], but a rapid increase in the mean free path should, if present, tell us the time at which the reionization of the low-density intergalactic medium was completed.
The observational pursuit of the reionization epoch may be helped by the optical afterglows of gamma-ray bursts, which can shine for a few minutes with a flux that is larger than even the most luminous quasars (65-70), probably due to beaming of the radiation. Because gamma-ray bursts may be produced by the death of a massive star, they can occur even in the lowest-mass halos forming at the earliest times, with fixed luminosities. Among other things, the absorption spectra of gamma-ray burst optical afterglows might reveal the damped Lyα absorption profile of the H in the intervening atomic medium (68) and absorption lines produced by neutral oxygen (which can be present in the atomic medium only, before reionization) ejected by massive stars.
Electron Scattering of the CMB by the Reionized Universe

Reionization made most of the electrons in the universe free of their atomic binding, and able to scatter the CMB photons again. Before recombination at z ≈ 1100, the universe was opaque, but because of the large factor by which the universe expanded from recombination to the reionization epoch, the electron Thomson scattering optical depth produced by the intergalactic medium after reionization, τe, is low. If the universe had reionized suddenly at z ≈ 6, then τe ≈ 0.03. Because the fraction of matter that is ionized must increase gradually, from the time the first stars were formed to the end of reionization at z ≈ 6, τe must include the contribution from the partially ionized medium at z > 6, and it must therefore be greater than 0.03.
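The τe ≈ 0.03 figure for sudden reionization at z ≈ 6 can be reproduced with a simple integral, τe = ∫ ne σT c (dt/dz) dz. The sketch below assumes a flat ΛCDM cosmology with illustrative WMAP-era parameters (my assumption, not stated in the text) and ignores the small helium contribution:

```python
from math import sqrt, pi

# Physical constants (cgs)
sigma_T = 6.652e-25     # Thomson cross-section, cm^2
c       = 2.998e10      # speed of light, cm/s
m_p     = 1.673e-24     # proton mass, g
G       = 6.674e-8      # gravitational constant

# Illustrative WMAP-era cosmological parameters (assumed, not from the text)
h, Om, OL, Ob, X_H = 0.7, 0.3, 0.7, 0.045, 0.76
H0 = h * 3.241e-18      # Hubble constant in s^-1 (100 km/s/Mpc = 3.241e-18 s^-1)

rho_crit = 3.0 * H0**2 / (8.0 * pi * G)
n_H0 = X_H * Ob * rho_crit / m_p     # present-day hydrogen number density, cm^-3

def tau_e(z_re, dz=1e-3):
    """Thomson optical depth for sudden, complete reionization of hydrogen
    at redshift z_re: tau = integral of n_e sigma_T c dt, with
    dt/dz = 1 / [H(z) (1 + z)] and n_e = n_H0 (1 + z)^3 for z < z_re."""
    tau, z = 0.0, 0.0
    while z < z_re:
        Hz = H0 * sqrt(Om * (1.0 + z)**3 + OL)
        tau += n_H0 * (1.0 + z)**3 * sigma_T * c / (Hz * (1.0 + z)) * dz
        z += dz
    return tau

tau6 = tau_e(6.0)    # close to the quoted ~0.03 for hydrogen-only reionization
```

Pushing z_re to ≈17 in the same integral gives a much larger τe, which is the essence of the WMAP tension described next.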


The sooner reionization started, the larger the value of τe. The WMAP mission has measured τe from the power spectrum of the polarization and temperature fluctuations of the CMB. A model-independent measurement from the polarization-temperature correlation gives τe ≈ 0.16 ± 0.04 (73), but a fit to the CDM model with six free parameters, using both the correlation of temperature and polarization fluctuations found by WMAP and other data, gives τe ≈ 0.17 ± 0.06 (13). An optical depth as large as τe ≈ 0.16 is surprising because it implies that a large fraction of the matter in the universe was reionized as early as z ≈ 17, when halos with mass as low as 10^7 M⊙ could collapse only from 3σ peaks, and were therefore still very rare (Fig. 3). The errors on τe will need to be reduced before we can assign a high degree of confidence to its high value (74).
What are the implications of a high τe if it is confirmed? Measurements of the emission rate at z ≈ 4 from the Lyα forest show that to obtain τe > 0.1, the emission rate would need to increase with z (75), and a large increase is required up to z ≈ 17 to reach τe ≈ 0.16. In view of the smaller mass fraction in collapsed halos at this high z, it is clear that a large increase in the ionizing radiation emitted per unit mass is required from z ≈ 6 to 17. Models have been proposed to account for an early reionization, based on a high emission
efficiency at high
z (7684). A possible rea- son for this high efficiency is that if the first stars that formed with no heavy elements were all massive (34, 36), they would have emitted as many as 105 ionizing photons per baryon in stars (85), many more than emitted by observed stellar populations (8689). It is not clear, however, if enough of these mas- sive stars can form in the first low-mass halos at z 􏱝 17, once the feedback effects of ultra- violet emission and supernovae ( 37, 38, 45) are taken into account. A different possibility might be that more objects than expected were forming at high z due to a fundamental change in the now favored CDM model.
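For reference, the electron-scattering optical depth τe discussed here is, in standard notation (σT the Thomson cross-section, n_e the free-electron number density), the line-of-sight integral

```latex
\tau_e = c\,\sigma_T \int_0^{z_{\mathrm{r}}} n_e(z)\,\left|\frac{dt}{dz}\right|\,dz
```

so a larger τe means free electrons were present over a longer stretch of cosmic time, i.e. an earlier reionization redshift z_r.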
The Future: the 21-cm Signature of the Atomic Medium

Many of the observational signatures of the epoch of reionization probe regions of the universe where stars have already formed and the medium has been reionized or polluted by heavy elements. But there is a way to study the undisturbed atomic medium. Nature turns out to be surprisingly resourceful in providing us with opportunities to scrutinize the most remote landscapes of the universe. The hyperfine structure of H atoms, the 21-cm transition due to the spin interaction of the electron and the proton, provides a mechanism to probe the atomic medium. When observing the CMB radiation, the intervening H can change the intensity at the redshifted 21-cm wavelength by a small amount, causing absorption if its spin temperature is lower than the CMB temperature, and emission if the spin temperature is higher. The spin temperature reflects the fraction of atoms in the ground and the excited hyperfine levels. The gas kinetic temperature cooled below the CMB temperature during the Dark Age owing to adiabatic expansion, although the spin temperature was kept close to the CMB temperature (90). When the first stars appeared in the universe, a mechanism for coupling the spin and kinetic temperature of the gas, and hence for lowering the spin temperature and making the H visible in absorption against the CMB, started to operate. The ultraviolet photons emitted by stars that penetrated the atomic medium were repeatedly scattered by H atoms after being redshifted to the Lyα resonance line, and these scatterings redistributed the occupation of the hyperfine structure levels (90–93), bringing the spin temperature down to the kinetic temperature and causing absorption.
As the first generation of stars evolved, supernova remnants and x-ray binaries probably emitted x-rays that penetrated into the intergalactic medium and heated it by photoionization; gas at high density could also be shock-heated when collapsing into halos. The gas kinetic and spin temperatures could then be raised above the CMB, making the 21-cm signal observable in emission (93, 94). This 21-cm signal should reveal an intricate angular and frequency structure reflecting the density and spin temperature variations in the atomic medium (95–100). Several radio observatories will be attempting to detect the signal (101).
The observation of the 21-cm signal on the CMB will be a challenge, because of the long wavelength and the faintness of the signal. However, the potential for the future is enormous: detailed information on the state of density fluctuations of the atomic medium at the epoch when the first stars were forming, and on the spin temperature variations induced by the ultraviolet and x-ray emission from the first sources, is encoded in the fine ripples of the CMB at its longest wavelengths.
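The absorption/emission criterion described above (absorption when the spin temperature T_S is below the CMB temperature, emission when it is above) is usually summarised by the differential brightness temperature of the 21-cm signal. In a commonly quoted simplified form (the 27 mK prefactor absorbs fiducial cosmological parameters; x_HI is the neutral hydrogen fraction and δ the local overdensity):

```latex
\delta T_b \approx 27\, x_{\mathrm{HI}}\,(1+\delta)\left(1-\frac{T_{\mathrm{CMB}}}{T_S}\right)\sqrt{\frac{1+z}{10}}\ \mathrm{mK}
```

The sign of the factor (1 - T_CMB/T_S) is what switches the signal between absorption and emission.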

Let's try to visualise the three steps in this challenge.

The cosmic microwave background (CMB) decoupled from the cosmic gas about 400,000 years after the Big Bang, when the Universe cooled sufficiently for protons and electrons to combine to form neutral hydrogen. Radiation from this time is able to reach us directly, providing a snapshot of the primordial Universe.
Our understanding of structure is based upon the observation of small perturbations in the temperature maps of the CMB. These indicate that the early Universe was inhomogeneous at the level of 1 part in 100,000. Over time the action of gravity causes the growth of these small perturbations into larger non-linear structures, which collapse to form sheets, filaments, and halos. These non-linear structures provide the framework within which galaxies form via the collapse and cooling of gas until the density required for star formation is reached.
This review focuses on an alternative approach based upon making observations of the red-shifted 21 cm line of neutral hydrogen. This 21 cm line is produced by the hyperfine splitting caused by the interaction between electron and proton magnetic moments. Hydrogen is ubiquitous in the Universe, amounting to 75% of the gas mass present in the intergalactic medium (IGM). As such, it provides a convenient tracer of the properties of that gas and of major milestones in the first billion years of the Universe’s history.
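The numbers behind the 21 cm line are easy to reproduce from the hyperfine transition frequency of hydrogen (1420.4 MHz); a minimal sketch using standard physical constants:

```python
H_PLANCK = 6.62607015e-34    # Planck constant, J s
C_LIGHT = 299792458.0        # speed of light, m/s
F21_HZ = 1.420405751768e9    # hydrogen hyperfine transition frequency, Hz
EV_JOULE = 1.602176634e-19   # joules per electron-volt

# wavelength of the transition, in centimetres
wavelength_cm = C_LIGHT / F21_HZ * 100.0
# energy of the hyperfine splitting, in micro-electron-volts
energy_uev = H_PLANCK * F21_HZ / EV_JOULE * 1e6

print(round(wavelength_cm, 1))  # 21.1 cm
print(round(energy_uev, 2))     # ~5.87 micro-eV
```

The tiny splitting energy (millions of times smaller than optical transitions) is why the line is so weak, and why hydrogen's sheer abundance is needed to make it observable.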

What does an "intricate filigree of filamentary features" really look like? In 2014 a team created a 3-D map of a large region of the cosmic web, dating back to when the Universe was just a quarter of its current age (i.e. 11 billion years ago). I will go into a little detail because it is such a great mix of observation and modelling, and it's a bit like making a CT-scan of the Universe (below is a verbatim description from the project description).
Firstly, the cosmic web is difficult to see, and astronomers usually resort to indirect detection with the help of quasars. Quasars are the nuclei of distant galaxies which, powered by the infall of matter onto a supermassive black hole, shine as the brightest objects in the Universe. As their light travels towards Earth, it encounters the rarefied cosmic gas filling the void between galaxies, and some of the light is absorbed.
Between stars and galaxies there is this "rarefied cosmic gas", which I guess is some mixture of the intergalactic medium, interstellar medium, intracluster medium, warm-hot intergalactic medium, cosmic dust, intergalactic dust, etc.
Crucially, this absorption occurs at very specific frequencies. When astronomers on Earth split a quasar's light, rainbow-like, into its different component colours, the absorption creates a characteristic pattern in the spectrum: narrow darker regions which astronomers call absorption lines. The pattern and shape of these lines allow astronomers to study the distribution of the absorbing gas.
Since our Universe is expanding, the location of a gas cloud's absorption lines within the spectrum depends on the cloud's distance from Earth ("cosmological redshift"). This is because the quasar light stretches as it travels toward Earth, thus as the light passes through various gas clouds the absorption signature is imprinted on a different region of the quasar's spectrum. The quasar light thus bears the imprint of all the clouds it encountered; for each cloud, the position of the absorption line in the spectrum contains information about the cloud's distance from Earth. The most prominent of these absorption patterns is caused by the Lyman-alpha absorption line of hydrogen gas. Collectively, the pattern of Lyman-alpha lines, each associated with a different cloud, is known as the "Lyman-alpha forest".
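The way each cloud imprints its line at a different position in the spectrum can be sketched directly: the observed wavelength of the Lyman-alpha absorption is just the rest wavelength stretched by (1 + z) of the absorbing cloud (the example redshifts below are illustrative):

```python
LYA_REST_NM = 121.567  # rest-frame wavelength of the hydrogen Lyman-alpha line, nm

def lya_observed_nm(z_cloud):
    """Wavelength at which a cloud at redshift z_cloud absorbs in the quasar spectrum."""
    return LYA_REST_NM * (1.0 + z_cloud)

# Clouds at different distances imprint lines at different wavelengths,
# building up the "forest" of absorption lines:
for z in (2.0, 2.5, 3.0):
    print(f"cloud at z = {z}: absorption line at {lya_observed_nm(z):.1f} nm")
```

Each cloud along the sight-line therefore gets its own line, and reading off the line positions reconstructs the clouds' distances.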
In fact Lyman-alpha blobs (LABs) are huge concentrations of gas emitting the Lyman-alpha emission line, a spectral line of hydrogen.

But bright quasars are very rare, and hence sparse on the sky. In consequence, the sight-lines where the cosmic web can be traced using quasar light are so few and far between that they do not provide nearly enough information to construct a three-dimensional map.

Absorption Lines

For this reason, astronomers considered using the combined starlight of distant galaxies as an alternative light source. Galaxies are nearly 100 times more numerous than quasars, and if they could be used as backlights for absorption studies this would enable a high-fidelity 3-D tomographic mapping of cosmic structure, similar to the computed tomography (CT) methods used in medical imaging. The snag is that galaxies are about 15 times fainter than quasars, and the prevailing view was that they were simply not bright enough for the experiment to be feasible. The team showed, on the contrary, that this kind of map is feasible with existing telescopes and instruments.

Gaseous Filaments

We should remember that the use of the Lyman-alpha forest implies that the intergalactic medium has already undergone reionization into a plasma, and that neutral gas (neutral atomic hydrogen-1) only exists in small clouds. If you push back before the reionization you find that the oldest quasars have absorption regions in front of them, indicating that the intergalactic medium was at that time a neutral gas (the absorption regions disappear when the intergalactic medium is re-ionised). Although quasars produce intense ionising ultraviolet radiation, they are not considered the primary source of reionization. Stars (Population III stars) and dwarf galaxies are considered the most likely to have caused the reionization.

the history of hydrogen clouds over the last 5 billion years or so. In mainstream cosmology quasars are at cosmological distances and hydrogen clouds were one of the first structures to form in the universe. As the light from the quasars passes through these hydrogen clouds the atoms absorb certain frequencies of light and a 'black line' appears on the spectrum. With redshift, these lines become longer and longer in wavelength and, as the light passes through successive clouds, a whole forest of lines is built up. At any epoch in time the average spacing of the hydrogen clouds should be the same, and so in a static universe we would expect to find the same average spacing across the whole range of redshifts. In an expanding universe, the clouds will move further and further apart, and so the average spacing will increase as the redshift reduces. If we ask the question, how long ago were the hydrogen clouds touching, we find that this was at a redshift of 29, or 46x10^9 years, three times the accepted age of the universe. If we ask the question "at what redshift/time were the clouds on average at atomic spacing", we find that this was at a redshift of z = 4x10 or 6x10 times the presently accepted age! Either the universe is much older than the presently accepted value, in order for these clouds to have achieved their present separations, or there must be some 'new physics' (other than inflation) required for the Big Bang theory. The question remains: if quasars are at cosmic distances and Lyα lines do represent hydrogen cloud separation, then why, in an expanding universe, are they, locally and on average, equally spaced over a range of redshifts?

The period of reionization started with the ionising radiation from the first stars, and ended when all the atoms in the intergalactic medium (IGM) had been re-ionised.
This intergalactic medium (IGM) is itself a bit of an enigma since it is the matter that resides in the void between galaxies. Initially the intergalactic reservoir was even larger than today, simply because only a few massive galactic halos had coalesced from it. As one writer put it, "the intergalactic medium is the trough from which galaxies feed", and as such it sets the minimum mass for galaxies (possibly around 10 million times the mass of our Sun). The IGM can be used to measure the total ionising photon production of galaxies, as well as galaxies' stellar nucleosynthetic yield, since a large fraction of it is ejected into the IGM. Many of the physical processes at play in the IGM are simplified versions of those encountered in the interstellar medium (ISM).

Galaxy Group EGS77 Ionized Bubbles

Above we have an impression of bubbles of hydrogen gas ionised by the stars in three galaxies in the galaxy group EGS77 (these galaxies are examples of what are called Lyman-break galaxies).

What we are 'seeing' above is the earliest moment when the first generation of stars started to re-ionise the hydrogen gas that permeated the Universe. This happened only ca. 680 million years after the Big Bang. Elementary particles had combined to form neutral hydrogen ca. 400,000 years after the Big Bang, and a few hundred million years later the first stars started to form. The problem is that the young Universe, filled with hydrogen atoms, attenuated ultraviolet light, blocking our view of the early galaxies. But once stars started to clump together in early galaxies, the intense radiation from the newly formed blue stars ionised the surrounding hydrogen gas, forming bubbles that allowed the light to escape and travel towards the Earth without too much attenuation. Eventually, bubbles like these grew around all galaxies and filled intergalactic space, clearing the way for light to travel across the Universe. This moment represents what is called the "cosmic dawn", an intermediate state between a neutral and an ionised Universe.
All three galaxies showed strong hydrogen Lyman-alpha emission lines at a redshift of z = 7.7, which is the equivalent of 680 million years after the Big Bang. The ionised bubbles (about 2.2 million light-years across) were big enough that the Lyman-alpha photons were redshifted before reaching the bubble boundary, and thus could escape unscathed and later be detected on Earth. Lyman-alpha galaxies provide a powerful tool to probe the central phase of reionization, where the neutral fraction of gas is about 50%, because Lyman-alpha photons scatter resonantly. The basic idea is to use the intergalactic medium (IGM) as a kind of cosmological probe, since it can provide information about the large-scale structure of the Universe.
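The conversion from z = 7.7 to roughly 680 million years after the Big Bang can be checked by integrating the Friedmann equation for a flat ΛCDM universe; a minimal sketch, assuming Planck-like parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31):

```python
import math

# Assumed flat Lambda-CDM parameters (close to Planck values)
H0 = 67.7                  # Hubble constant, km/s/Mpc
OMEGA_M = 0.31             # matter density parameter
OMEGA_L = 1.0 - OMEGA_M    # dark-energy density (flat universe)
KM_PER_MPC = 3.0857e19     # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16     # seconds in one gigayear

def age_at_redshift(z, steps=100_000):
    """Age of the universe at redshift z, in Gyr, from t = integral of
    da / (a * H(a)) with the scale factor a running from 0 to 1/(1+z)."""
    h0 = H0 / KM_PER_MPC   # H0 in 1/s
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * da  # midpoint rule
        hubble = h0 * math.sqrt(OMEGA_M / a**3 + OMEGA_L)
        total += da / (a * hubble)
    return total / SEC_PER_GYR

print(round(age_at_redshift(7.7), 2))  # ~0.67 Gyr, i.e. roughly 680 million years
```

The same routine with z = 0 returns about 13.8 Gyr, the present age of the Universe, which is a useful sanity check on the assumed parameters.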
Theodore Lyman IV (American, 1874-1954) discovered the Lyman series, which is a hydrogen spectral series. The Lyman-alpha line (technically it's a doublet) is emitted when an electron falls from the first excited shell to the ground state. In hydrogen the emission has a wavelength of about 121 nm, which is part of the vacuum ultraviolet that is absorbed by air. Usually this would be measured by satellite-borne instruments, but for extremely distant sources the redshift is such that the hydrogen spectral line penetrates the atmosphere and can be detected by ground-based instruments.
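The point about ground-based detection follows from the same (1 + z) stretching: taking a rough ~300 nm atmospheric UV cutoff (an assumed round number), Lyman-alpha only becomes observable from the ground beyond a minimum redshift:

```python
LYA_REST_NM = 121.567          # rest-frame Lyman-alpha wavelength, nm
ATMOSPHERIC_CUTOFF_NM = 300.0  # rough wavelength below which air absorbs UV (assumption)

# Minimum redshift for ground-based Lyman-alpha detection:
# the line must be stretched past the atmospheric cutoff.
z_min = ATMOSPHERIC_CUTOFF_NM / LYA_REST_NM - 1.0
print(round(z_min, 2))  # ~1.47
```

So the z = 7.7 galaxies above, with Lyman-alpha redshifted to roughly 1060 nm in the near-infrared, are comfortably observable from the ground.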

The cosmic Dark Age did not just stop; there must have been a moment when the very first galaxies were formed. Scientists use radio telescopes to look for a signal from neutral hydrogen. There was a point when ionised hydrogen atoms were finally able to capture electrons and become neutral atoms, and the belief is that these neutral hydrogen atoms were the dominant element during the cosmic Dark Age. As they clumped together to form the first stars, they were re-ionised (in the so-called 'Epoch of Reionization'). We know that neutral hydrogen emits radiation from the hyperfine transition, at a wavelength of 21 cm, so today the key is to search for the redshifted 21-cm signal. The problem is that the signal is hidden in an overwhelmingly bright galactic and extragalactic foreground, 4 to 5 orders of magnitude stronger. Even a simple FM radio signal can be reflected off a plane's fuselage and swamp the system. So the technique involves weeding out all signal contaminants other than those from the primal hydrogen. To date no signal from neutral hydrogen has been found down to a redshift of 6.5.

Sky surveys and mappings of the various wavelength bands of electromagnetic radiation (in particular 21-cm emission) have yielded much information on the content and character of the universe's structure. The organization of structure appears to follow a hierarchical model, with organization up to the scale of superclusters and filaments. Larger than this (at scales between 30 and 200 megaparsecs[53]), there seems to be no continued structure, a phenomenon that has been referred to as the End of Greatness.[54]
Walls, filaments, nodes, and voids

DTFE reconstruction of the inner parts of the 2dF Galaxy Redshift Survey
The organization of structure arguably begins at the stellar level, though most cosmologists rarely address astrophysics on that scale. Stars are organized into galaxies, which in turn form galaxy groups, galaxy clusters, superclusters, sheets, walls and filaments, which are separated by immense voids, creating a vast foam-like structure[55] sometimes called the "cosmic web". Prior to 1989, it was commonly assumed that virialized galaxy clusters were the largest structures in existence, and that they were distributed more or less uniformly throughout the universe in every direction. However, since the early 1980s, more and more structures have been discovered. In 1983, Adrian Webster identified the Webster LQG, a large quasar group consisting of 5 quasars. The discovery was the first identification of a large-scale structure, and has expanded the information about the known grouping of matter in the universe.

Even at the very largest scale the Universe is not uniform, but forms a cosmic web, which is easily seen in the way galaxies are distributed. This web contains massive galaxy clusters, interconnected through filaments (including sheets and walls), and separated by large voids.

Galaxy Epoch

Starting ca. 1 billion years after the Big Bang
temperatures from 10 K (?? MeV) to 10 K (?? MeV)

I kick off this section with another chunk from the 2018 review article by Pratika Dayal and Andrea Ferrara.

Over the last 20-30 years the foundations of physical cosmology have been strengthened and refined, notably through work on the Hubble expansion, the cosmic microwave background (CMB) and the abundance of light elements (nucleosynthesis). At the same time, physical cosmology has introduced several genuine, irrefutable surprises, namely:
We live in a flat Universe, corroborating predictions of inflationary theories
Roughly 85.3% of cosmic matter is constituted by some kind of, as yet unknown, dark matter particles
Hubble expansion is accelerating, possibly due to a non-clustering, negative-pressure fluid called dark energy
Black holes a billion times the mass of the Sun were already in place 1 billion years after the Big Bang
Distant galaxies, and products of their stellar activity, including Gamma-Ray Bursts (GRBs), have recently been detected out to redshifts z > 11, corresponding to a cosmic age shorter than 0.42 Gyr.
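The 85.3% figure for dark matter follows from the ratio of non-baryonic to total matter density; a sketch assuming Planck-era density parameters chosen to reproduce the quoted figure (Ωm ≈ 0.308, Ωb ≈ 0.0452, assumed values):

```python
# Assumed density parameters (roughly Planck-era values)
OMEGA_M = 0.308   # total matter density parameter
OMEGA_B = 0.0452  # baryonic (ordinary) matter density parameter

# Fraction of all matter that is dark (non-baryonic)
dark_fraction = (OMEGA_M - OMEGA_B) / OMEGA_M
print(round(100 * dark_fraction, 1))  # ~85.3 %
```

In other words, ordinary atoms make up less than a sixth of the matter budget; the rest is the unidentified dark matter.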

According to the review article we don't have a complete, exact theory to explain this puzzling experimental evidence, and some large gaps remain in the basic cosmological scenario.

However, what is now widely accepted is that the Universe underwent an inflationary phase early on that sourced the nearly scale-invariant (i.e. unchanged under rescaling) primordial density perturbations that have resulted in the large-scale structure we observe today. Yet inflation requires non-standard physics and, as of now, there is no consensus on the mechanism that made the Universe inflate. Moreover, observations place only a few constraints on the numerous inflationary models available. This inflationary expansion forced