======Top Thirty Problems with the Big Bang======
by [[TomVanFlandern Tom Van Flandern]] from [[http://metaresearch.org/cosmology/BB-top-30.asp MetaResearch.org]]

//reprinted from Meta Research Bulletin 11, 6-13 (2002)//

Abstract. Earlier, we presented a simple list of the top ten problems with the Big Bang. [[1]] Since that publication, we have had many requests for citations and additional details, which we provide here. We also respond to a few rebuttal arguments to the earlier list. Then we supplement the list based on the last four years of developments – with another 20 problems for the theory.

(1) Static universe models fit observational data better than expanding universe models.
Static universe models match most observations with no adjustable parameters. The Big Bang can match each of the critical observations, but only with adjustable parameters, one of which (the cosmic deceleration parameter) requires mutually exclusive values to match different tests. [[2],[3]] Without ad hoc theorizing, this point alone falsifies the Big Bang. Even if the discrepancy could be explained, Occam’s razor favors the model with fewer adjustable parameters – the static universe model.

(2) The microwave “background” makes more sense as the limiting temperature of space heated by starlight than as the remnant of a fireball.
The expression “the temperature of space” is the title of chapter 13 of Sir Arthur Eddington’s famous 1926 work. [[4]] Eddington calculated the minimum temperature any body in space would cool to, given that it is immersed in the radiation of distant starlight. With no adjustable parameters, he obtained 3°K (later refined to 2.8°K [[5]]), essentially the same as the observed, so-called “background”, temperature. A similar calculation, although with less certain accuracy, applies to the limiting temperature of intergalactic space because of the radiation of galaxy light. [[6]] So the intergalactic matter is like a “fog”, and would therefore provide a simpler explanation for the microwave radiation, including its blackbody-shaped spectrum.
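
For illustration, the short sketch below converts a radiation energy density into the temperature of a blackbody with the same energy density, which is the kind of reasoning behind such a “temperature of space” estimate. The starlight energy density used is an assumed round figure of the right order, not Eddington’s own number.

%%(python)
# Minimal sketch: equilibrium "temperature of space" from a radiation energy density.
# Assumption: starlight energy density of order 7e-14 J/m^3 (illustrative value only).
a_rad = 7.566e-16      # radiation constant, J m^-3 K^-4
u_starlight = 7.0e-14  # assumed starlight energy density, J/m^3

T = (u_starlight / a_rad) ** 0.25
print(f"Equivalent blackbody temperature: {T:.2f} K")   # ~3.1 K
%%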

Such a fog also explains the otherwise troublesome ratio of infrared to radio intensities of radio galaxies. [[7]] The amount of radiation emitted by distant galaxies falls with increasing wavelengths, as expected if the longer wavelengths are scattered by the intergalactic medium. For example, the brightness ratio of radio galaxies at infrared and radio wavelengths changes with distance in a way which implies absorption. Basically, this means that the longer wavelengths are more easily absorbed by material between the galaxies. But then the microwave radiation (between the two wavelengths) should be absorbed by that medium too, and has no chance to reach us from such great distances, or to remain perfectly uniform while doing so. It must instead result from the radiation of microwaves from the intergalactic medium. This argument alone implies that the microwaves could not be coming directly to us from a distance beyond all the galaxies, and therefore that the Big Bang theory cannot be correct.

None of the predictions of the background temperature based on the Big Bang were close enough to qualify as successes, the worst being Gamow’s upward-revised estimate of 50°K made in 1961, just two years before the actual discovery. Clearly, without a realistic quantitative prediction, the Big Bang’s hypothetical “fireball” becomes indistinguishable from the natural minimum temperature of all cold matter in space. But none of the predictions, which ranged between 5°K and 50°K, matched observations. [[8]] And the Big Bang offers no explanation for the kind of intensity variations with wavelength seen in radio galaxies.

(3) Element abundance predictions using the Big Bang require too many adjustable parameters to make them work.
The universal abundances of most elements were predicted correctly by Hoyle in the context of the original Steady State cosmological model. This worked for all elements heavier than lithium. The Big Bang co-opted those results and concentrated on predicting the abundances of the light elements. Each such prediction requires at least one adjustable parameter unique to that element prediction. Often, it’s a question of figuring out why the element was either created or destroyed or both to some degree following the Big Bang. When you take away these degrees of freedom, no genuine prediction remains. The best the Big Bang can claim is consistency with observations using the various ad hoc models to explain the data for each light element. Examples: [[9],[10]] for helium-3; [[11]] for lithium-7; [[12]] for deuterium; [[13]] for beryllium; and [[14],[15]] for overviews. For a full discussion of an alternative origin of the light elements, see [[16]].

(4) The universe has too much large scale structure (interspersed “walls” and voids) to form in a time as short as 10-20 billion years.
The average speed of galaxies through space is a well-measured quantity. At those speeds, galaxies would require roughly the age of the universe to assemble into the largest structures (superclusters and walls) we see in space [[17]], and to clear all the voids between galaxy walls. But this assumes that the initial directions of motion are special, e.g., directed away from the centers of voids. To get around this problem, one must propose that galaxy speeds were initially much higher and have slowed due to some sort of “viscosity” of space. To form these structures by building up the needed motions through gravitational acceleration alone would take in excess of 100 billion years. [[18]]
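
A rough crossing-time estimate shows the scale of the difficulty. The void radius and galaxy peculiar velocity in the sketch below are assumed round numbers chosen for illustration, not values taken from the cited papers.

%%(python)
# Sketch: how long a galaxy needs to cross a void at a typical peculiar velocity.
MPC_IN_M = 3.086e22   # meters per megaparsec
GYR_IN_S = 3.156e16   # seconds per gigayear

void_radius_mpc = 50.0   # assumed void radius, Mpc
v_peculiar_m_s = 600e3   # assumed peculiar velocity, 600 km/s

t_s = (void_radius_mpc * MPC_IN_M) / v_peculiar_m_s
print(f"Crossing time: {t_s / GYR_IN_S:.0f} Gyr")   # ~80 Gyr, several Hubble times
%%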

(5) The average luminosity of quasars must decrease with time in just the right way so that their average apparent brightness is the same at all redshifts, which is exceedingly unlikely.
According to the Big Bang theory, a quasar at a redshift of 1 is roughly ten times as far away as one at a redshift of 0.1. (The redshift-distance relation is not quite linear, but this is a fair approximation.) If the two quasars were intrinsically similar, the high redshift one would be about 100 times fainter because of the inverse square law. But it is, on average, of comparable apparent brightness. This must be explained as quasars “evolving” their intrinsic properties so that they get smaller and fainter as the universe evolves. That way, the quasar at redshift 1 can be intrinsically 100 times brighter than the one at 0.1, explaining why they appear (on average) to be comparably bright. It isn’t as if the Big Bang has a reason why quasars should evolve in just this magical way. But that is required to explain the observations using the Big Bang interpretation of the redshift of quasars as a measure of cosmological distance. See [[19],[20]].
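
The arithmetic here is just the inverse-square law. The sketch below assumes the approximate proportionality between redshift and distance described above.

%%(python)
# Sketch: apparent brightness ratio for two intrinsically identical quasars,
# taking distance roughly proportional to redshift (as in the text).
import math

z_near, z_far = 0.1, 1.0
distance_ratio = z_far / z_near           # ~10x farther
flux_ratio = distance_ratio ** 2          # inverse-square law: ~100x fainter
delta_mag = 2.5 * math.log10(flux_ratio)  # ~5 magnitudes

print(f"Expected flux ratio: {flux_ratio:.0f}x, i.e. {delta_mag:.1f} mag fainter")
%%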

By contrast, the relation between apparent magnitude and distance for quasars is a simple, inverse-square law in alternative cosmologies. In [20], Arp presents extensive evidence that large quasar redshifts are a combination of a cosmological factor and an intrinsic factor, with the latter dominant in most cases. Most large quasar redshifts (e.g., z > 1) therefore have little correlation with distance. A grouping of 11 quasars close to NGC 1068, having nominal ejection patterns correlated with galaxy rotation, provides further strong evidence that quasar redshifts are intrinsic. [[21]]

(6) The ages of globular clusters appear older than the universe.
Even though the data have been stretched in the direction of resolving this since the “top ten” list first appeared, the error bars on the Hubble age of the universe (12±2 Gyr) still do not quite overlap the error bars on the oldest globular clusters (16±2 Gyr). Astronomers have studied this for the past decade, but resist the “observational error” explanation because that would almost certainly push the Hubble age older (as Sandage has been arguing for years), which creates several new problems for the Big Bang. In other words, the cure is worse than the illness for the theory. In fact, a new, relatively bias-free observational technique has gone the opposite way, lowering the Hubble age estimate to 10 Gyr, making the discrepancy worse again. [[22],[23]]
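
For orientation, the sketch below shows where a “Hubble age” of this size comes from: the Hubble time 1/H0, and the age of a flat, matter-dominated model, (2/3)/H0, evaluated for an assumed H0 of 65 km/s/Mpc (an illustrative value, not one from the cited papers).

%%(python)
# Sketch: Hubble time and matter-dominated age for an assumed H0 (illustrative only).
MPC_IN_KM = 3.086e19   # kilometers per megaparsec
GYR_IN_S = 3.156e16    # seconds per gigayear

H0 = 65.0                        # assumed Hubble constant, km/s/Mpc
H0_per_s = H0 / MPC_IN_KM        # convert to 1/s
hubble_time_gyr = 1.0 / H0_per_s / GYR_IN_S
matter_age_gyr = (2.0 / 3.0) * hubble_time_gyr   # flat, matter-dominated model

print(f"Hubble time ~ {hubble_time_gyr:.0f} Gyr; matter-dominated age ~ {matter_age_gyr:.0f} Gyr")
%%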

(7) The local streaming motions of galaxies are too high for a finite universe that is supposed to be everywhere uniform.
In the early 1990s, we learned that the average redshift for galaxies of a given brightness differs on opposite sides of the sky. The Big Bang interprets this as the existence of a puzzling group flow of galaxies relative to the microwave radiation on scales of at least 130 Mpc. Earlier, the existence of this flow led to the hypothesis of a "Great Attractor" pulling all these galaxies in its direction. But in newer studies, no backside infall was found on the other side of the hypothetical feature. Instead, there is streaming on both sides of us out to 60-70 Mpc in a consistent direction relative to the microwave "background". The only Big Bang alternative to the apparent result of large-scale streaming of galaxies is that the microwave radiation is in motion relative to us. Either way, this result is trouble for the Big Bang. [[24],[25],[26],[27],[28]]

(8) Invisible dark matter of an unknown but non-baryonic nature must be the dominant ingredient of the entire universe.
The Big Bang requires sprinkling galaxies, clusters, superclusters, and the universe with ever-increasing amounts of this invisible, not-yet-detected “dark matter” to keep the theory viable. Overall, over 90% of the universe must be made of something we have never detected. By contrast, Milgrom’s model (the alternative to “dark matter”) provides a one-parameter explanation that works at all scales and requires no “dark matter” to exist at any scale. (I exclude the additional 50%-100% of invisible ordinary matter inferred to exist by, e.g., MACHO studies.) Some physicists don’t like modifying the law of gravity in this way, but a finite range for natural forces is a logical necessity (not just theory) spoken of since the 17th century. [[29],[30]]

Milgrom’s model requires nothing more than that. It is an operational model rather than one based on fundamentals, but it is consistent with more complete models invoking a finite range for gravity. So Milgrom’s model provides a basis for eliminating the need for “dark matter” in the universe at any scale. This represents one more Big Bang “fudge factor” no longer needed.
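
As an illustration of what a one-parameter model means in practice, the sketch below evaluates the deep-MOND relation v^4 = G·M·a0 for the flat rotation speed of a spiral galaxy. The galaxy mass is an assumed illustrative value; a0 is Milgrom’s single acceleration parameter.

%%(python)
# Sketch: flat rotation speed in the deep-MOND regime, v^4 = G * M * a0.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10         # Milgrom's acceleration constant, m/s^2 (the single parameter)
M_SUN = 1.989e30     # solar mass, kg

M = 1.0e11 * M_SUN   # assumed baryonic mass of a bright spiral, kg

v_flat = (G * M * a0) ** 0.25
print(f"Predicted flat rotation speed: {v_flat / 1e3:.0f} km/s")   # ~200 km/s
%%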

(9) The most distant galaxies in the Hubble Deep Field show insufficient evidence of evolution, with some of them having higher redshifts (z = 6-7) than the highest-redshift quasars.
The Big Bang requires that stars, quasars and galaxies in the early universe be “primitive”, meaning mostly metal-free, because it requires many generations of supernovae to build up metal content in stars. But the latest evidence suggests lots of metal in the “earliest” quasars and galaxies. [[31],[32],[33]] Moreover, we now have evidence for numerous ordinary galaxies in what the Big Bang expected to be the “dark age” of evolution of the universe, when the light of the few primitive galaxies in existence would be blocked from view by hydrogen clouds. [[34]]

(10) If the open universe we see today is extrapolated back near the beginning, the ratio of the actual density of matter in the universe to the critical density must differ from unity by just a part in 10^59. Any larger deviation would result in a universe already collapsed on itself or already dissipated.
Inflation failed to achieve its goal when many observations went against it. To maintain consistency and salvage inflation, the Big Bang has now introduced two new adjustable parameters: (1) the cosmological constant, which has a major fine-tuning problem of its own because theory suggests it ought to be of order 10^120, and observations suggest a value less than 1; and (2) “quintessence” or “dark energy”. [[35],[36]] This latter theoretical substance solves the fine-tuning problem by introducing invisible, undetectable energy sprinkled at will as needed throughout the universe to keep consistency between theory and observations. It can therefore be accurately described as “the ultimate fudge factor”.
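
The “part in 10^59” figure follows from the standard scaling of |Ω - 1| in an expanding model: it grows roughly in proportion to t during radiation domination and to t^(2/3) during matter domination. The sketch below back-extrapolates from today to the Planck time using assumed round epoch times.

%%(python)
# Sketch: back-extrapolating |Omega - 1| from today to the Planck time, using the
# standard scalings |Omega - 1| ~ t (radiation era) and ~ t^(2/3) (matter era).
# All epoch times are assumed round values.
t_now = 4.3e17        # present age, s (~13.6 Gyr)
t_eq = 1.6e12         # matter-radiation equality, s (assumed)
t_planck = 5.4e-44    # Planck time, s

omega_dev_now = 1.0   # generous upper bound on |Omega - 1| today

growth_matter = (t_now / t_eq) ** (2.0 / 3.0)   # growth since equality
growth_radiation = t_eq / t_planck              # growth from Planck time to equality
omega_dev_planck = omega_dev_now / (growth_matter * growth_radiation)

print(f"|Omega - 1| at the Planck time: ~{omega_dev_planck:.1e}")   # ~ a part in 10^59
%%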


Anyone doubting the Big Bang in its present form (which includes most astronomy-interested people outside the field of astronomy, according to one recent survey) would have good cause for that opinion and could easily defend such a position. This is a fundamentally different matter than proving the Big Bang did not happen, which would be proving a negative – something that is normally impossible. (E.g., we cannot prove that Santa Claus does not exist.) The Big Bang, much like the Santa Claus hypothesis, no longer makes testable predictions wherein proponents agree that a failure would falsify the hypothesis. Instead, the theory is continually amended to account for all new, unexpected discoveries. Indeed, many young scientists now think of this as a normal process in science! They forget or were never taught that a model has value only when it can predict new things that differentiate the model from chance and from other models before the new things are discovered. Explanations of new things are supposed to flow from the basic theory itself with at most an adjustable parameter or two, and not from add-on bits of new theory.

Of course, the literature also contains the occasional review paper in support of the Big Bang. [[37]] But these generally don’t count any of the prediction failures or surprises as theory failures as long as some ad hoc theory might explain them. And the “prediction successes” in almost every case do not distinguish the Big Bang from any of the four leading competitor models: Quasi-Steady-State [16,[38]], Plasma Cosmology [18], Meta Model [3], and Variable-Mass Cosmology [20].

For the most part, these four alternative cosmologies are ignored by astronomers. However, one web site by Ned Wright does try to advance counterarguments in defense of the Big Bang. [[39]] But his counterarguments are mostly old objections long since defeated. For example:
(1) In “Eddington did not predict the CMB”:
a. Wright argues that Eddington’s argument for the “temperature of space” applies at most to our Galaxy. But Eddington’s reasoning applies also to the temperature of intergalactic space, for which a minimum is set by the radiation of galaxy and quasar light. The original calculations half-a-century ago showed this limit probably fell in the range 1-6°K. [6] And that was before quasars were discovered and before we knew the modern space density of galaxies.
b. Wright also argues that dust grains cannot be the source of the blackbody microwave radiation because there are not enough of them to be opaque, as needed to produce a blackbody spectrum. However, opaqueness is required only in a finite universe. An infinite universe can achieve thermodynamic equilibrium (the actual requirement for a blackbody spectrum) even if transparent out to very large distances because the thermal mixing can occur on a much smaller scale than quantum particles – e.g., in the light-carrying medium itself.
c. Wright argues that dust grains do not radiate efficiently at millimeter wavelengths. However, efficient or not, if the equilibrium temperature they reach is 2.8°K, they must radiate away the energy they absorb from distant galaxy and quasar light at millimeter wavelengths. Temperature and wavelength are correlated for any bodies in thermal equilibrium.
(2) About Lerner’s argument against the Big Bang:
a. Lerner calculated that the Big Bang universe has not had enough time to form superclusters. Wright calculates that all the voids could be vacated and superclusters formed in less than 11-14 billion years (barely). But that assumes that almost all matter has initial speeds headed directly out of voids and toward matter concentrations. Lerner, on the other hand, assumed that the speeds had to be built up by gravitational attraction, which takes many times longer. Lerner’s point is more reasonable because doing it Wright’s way requires fine-tuning of initial conditions.
b. Wright argues that “there is certainly lots of evidence for dark matter.” The reality is that there is no credible observational detection of dark matter, so all the “evidence” is a matter of interpretation, depending on theoretical assumptions. For example, Milgrom’s Model explains all the same evidence without any need for dark matter.
(3) Regarding arguments against “tired light cosmology”:
a. Wright argues: “There is no known interaction that can degrade a photon's energy without also changing its momentum, which leads to a blurring of distant objects which is not observed.” While it is technically true that no such interaction has yet been discovered, reasonable non-Big-Bang cosmologies require the existence of entities many orders of magnitude smaller than photons. For example, the entity responsible for gravitational interactions has not yet been discovered. So the “fuzzy image” argument does not apply to realistic physical models in which all substance is infinitely divisible. By contrast, physical models lacking infinite divisibility have great difficulties explaining Zeno’s paradoxes – especially the extended paradox for matter. [3]
b. Wright argues that the stretching of supernovae light curves is not predicted by “tired light”. However, one cannot measure the stretching effect directly because the time under the lightcurve depends on the intrinsic brightness of the supernovae, which can vary considerably. So one must use indirect indicators, such as rise time only. And in that case, the data does not unambiguously favor either tired light or Big Bang models.
c. Wright argued that tired light does not produce a blackbody spectrum. But this is untrue if the entities producing the energy loss are many orders of magnitude smaller and more numerous than quantum particles.
d. Wright argues that tired light models fail the Tolman surface brightness test. This ignores that realistic tired light models must lose energy in the transverse direction, not just the longitudinal one, because light is a transverse wave. When this effect is considered, the predicted loss of light intensity goes with (1+z)^-2, which is in good agreement with most observations without any adjustable parameters. [2,[40]] The Big Bang, by contrast, predicts a (1+z)^-4 dependence, and must therefore invoke special ad hoc evolution (different from that applicable to quasars) to close the gap between theory and observations.
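
The gap between the two dimming laws is easy to quantify; the sketch below simply evaluates (1+z)^-2 and (1+z)^-4 at a few redshifts.

%%(python)
# Sketch: surface-brightness dimming factors under the two competing laws.
for z in (0.5, 1.0, 2.0):
    tired_light = (1 + z) ** -2   # the (1+z)^-2 law discussed in the text
    big_bang = (1 + z) ** -4      # standard expanding-space prediction
    print(f"z={z}: (1+z)^-2 = {tired_light:.3f}, (1+z)^-4 = {big_bang:.3f}, "
          f"ratio = {tired_light / big_bang:.1f}x")
%%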

By no means is this “top ten” list of Big Bang problems exhaustive – far from it. In fact, it is easy to argue that several of these additional 20 points should be among the “top ten”:
· "Pencil-beam surveys" show large-scale structure out to distances of more than 1 Gpc in both of two opposite directions from us. This appears as a succession of wall-like galaxy features at fairly regular intervals, the first of which, at about 130 Mpc distance, is called "The Great Wall". To date, 13 such evenly-spaced "walls" of galaxies have been found! [[41]] The Big Bang theory requires fairly uniform mixing on scales of distance larger than about 20 Mpc, so there apparently is far more large-scale structure in the universe than the Big Bang can explain.
· Many particles are seen with energies over 60×10^18 eV. But that is the theoretical energy limit for anything traveling more than 20-50 Mpc because of interaction with microwave background photons (a rough threshold estimate appears in the sketch following this list). [[42]] However, this objection assumes the microwave radiation is as the Big Bang expects, instead of a relatively sparse, local phenomenon.
· The Big Bang predicts that equal amounts of matter and antimatter were created in the initial explosion. Matter dominates the present universe apparently because of some form of asymmetry, such as CP violation, that caused most antimatter to annihilate with matter but left an excess of matter. Experiments are searching for evidence of this asymmetry, so far without success. Other galaxies can’t be antimatter because that would create a matter-antimatter boundary with the intergalactic medium that would create gamma rays, which are not seen. [[43],[44]]
· Even a small amount of diffuse neutral hydrogen would produce a smooth absorbing trough shortward of a QSO’s Lyman-alpha emission line. This is called the Gunn-Peterson effect, and is rarely seen, implying that most hydrogen in the universe has been re-ionized. A hydrogen Gunn-Peterson trough is now predicted to be present at a redshift z ≈ 6.1. [[45]] Observations of high-redshift quasars near z = 6 briefly appeared to confirm this prediction. However, a galaxy lensed by a foreground cluster has now been observed at z = 6.56, prior to the supposed reionization epoch and at a time when the Big Bang expects no galaxies to be visible yet. Moreover, if only a few galaxies had turned on by this early point, their emission would have been absorbed by the surrounding hydrogen gas, making these early galaxies invisible. [34] So the lensed galaxy observation falsifies this prediction and the theory it was based on. Another problem example: Quasar PG 0052+251 is at the core of a normal spiral galaxy. The host galaxy appears undisturbed by the quasar radiation, which, in the Big Bang, is supposed to be strong enough to ionize the intergalactic medium. [[46]]
· An excess of QSOs is observed around foreground clusters. Lensing amplification caused by foreground galaxies or clusters is too weak to explain this association between high- and low-redshift objects. This apparent contradiction has no solution under Big Bang premises that does not create some other problem. In particular, dark matter solutions would have to be centrally concentrated, contrary to observations that imply that dark matter increases away from galaxy centers. The high-redshift and low-redshift objects are probably actually at comparable distances, as Arp has maintained for 30 years. [[47]]
· The Big Bang violates the first law of thermodynamics, that energy cannot be either created or destroyed, by requiring that new space filled with “zero-point energy” be continually created between the galaxies. [[48]]
· In the Las Campanas redshift survey, statistical differences from homogenous distribution were found out to a scale of at least 200 Mpc. [[49]] This is consistent with other galaxy catalog analyses that show no trends toward homogeneity even on scales up to 1000 Mpc. [[50]] The Big Bang, of course, requires large-scale homogeneity. The Meta Model and other infinite-universe models expect fractal behavior at all scales. Observations remain in agreement with that.
· Elliptical galaxies supposedly bulge along the axis of the most recent galaxy merger. But the angular velocities of stars at different distances from the center are all different, making an elliptical shape formed in that way unstable. Such velocities would shear the elliptical shape until it was smoothed into a circular disk. Where are the galaxies in the process of being sheared?
· The polarization of radio emission rotates as it passes through magnetized extragalactic plasmas. Such Faraday rotations in quasars should increase (on average) with distance. If redshift indicates distance, then rotation and redshift should increase together. However, the mean Faraday rotation is less near z = 2 than near z = 1 (where quasars are apparently intrinsically brightest, according to Arp’s model). [[51]]
· If the dark matter needed by the Big Bang exists, microwave radiation fluctuations should have “acoustic peaks” on angular scales of 1° and 0.3°, with the latter prominent compared with the former. By contrast, if Milgrom’s alternative to dark matter (Modified Newtonian Dynamics) is correct, then the latter peak should be only about 20% of the former. Newly acquired data from the Boomerang balloon-borne instruments clearly favors the MOND interpretation over dark matter. [[52]]
· Redshifts are quantized for both galaxies [[53],[54]] and quasars [[55]]. So are other properties of galaxies. [[56]] This should not happen under Big Bang premises.
· The number density of optical quasars peaks at z = 2.5-3, and declines toward both lower and higher redshifts. At z = 5, it has dropped by a factor of about 20. This cannot be explained by dust extinction or survey incompleteness. The Big Bang predicts that quasars, the seeds of all galaxies, were most numerous at earliest epochs. [[57]]
· The falloff of the power spectrum at small scales can be used to determine the temperature of the intergalactic medium. It is typically inferred to be 20,000°K, but there is no evidence of evolution with redshift. Yet in the Big Bang, that temperature ought to adiabatically decrease as space expands everywhere. This is another indicator that the universe is not really expanding. [[58]]
· Under Big Bang premises, the fine structure constant must vary with time. [[59]]
· Measurements of the two-point correlation function for optically selected galaxies follow an almost perfect power law over nearly three orders of magnitude in separation. However, this result disagrees with n-body simulations in all the Big Bang’s various modifications. A complex mixture of gravity, star formation, and dissipative hydrodynamics seems to be needed. [[60]]
· Emission lines for z > 4 quasars indicate higher-than-solar quasar metallicities. [[61]] The iron to magnesium ratio increases at higher redshifts (earlier Big Bang epochs). [[62]] These results imply substantial star formation at epochs preceding or concurrent with the QSO phenomenon, contrary to normal Big Bang scenarios.
· The absorption lines of damped Lyman-alpha systems are seen in quasars. However, searches with the HST NICMOS spectrograph for these objects directly in the infrared have mostly failed to detect them. [[63]] Moreover, the relative abundances have surprising uniformity, unexplained in the Big Bang. [[64]] The simplest explanation is that the absorbers are in the quasar’s own environment, not at their redshift distance as the Big Bang requires.
· The luminosity evolution of brightest cluster galaxies (BCGs) cannot be adequately explained by a single evolutionary model. For example, BCGs with low x-ray luminosity are consistent with no evolution, while those with high x-ray luminosity are brighter on average at high redshift. [[65]]
· The fundamental question of how bound aggregates of order 100,000 stars (globular clusters) were able to form at early cosmological times remains unsolved in the Big Bang. It is no mystery in infinite universe models. [[66]]
· Blue galaxy counts show an excess of faint blue galaxies by a factor of 10 at magnitude 28. This implies that the volume of space is larger than in the Big Bang, where it should get smaller as one looks back in time. [[67]]
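
Referring back to the cosmic-ray point above: the quoted energy limit is the threshold for photopion production on microwave-background photons. The sketch below reproduces the order of magnitude; the mean photon energy used is an assumed representative value.

%%(python)
# Sketch: threshold proton energy for photopion production (p + gamma -> Delta)
# on a microwave-background photon, for a head-on collision.
m_p = 938.272e6    # proton rest energy, eV
m_pi = 134.977e6   # neutral-pion rest energy, eV
E_gamma = 6.3e-4   # assumed mean CMB photon energy, eV (~2.7 kT at 2.7 K)

E_threshold = (2 * m_p * m_pi + m_pi ** 2) / (4 * E_gamma)
print(f"Threshold proton energy: ~{E_threshold:.1e} eV")   # ~1e20 eV
%%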

Perhaps never in the history of science has so much quality evidence accumulated against a model so widely accepted within a field. Even the most basic elements of the theory, the expansion of the universe and the fireball remnant radiation, remain interpretations with credible alternative explanations. One must wonder why, in this circumstance, four good alternative models are not even being comparatively discussed by most astronomers.

Acknowledgments
Obviously, hundreds of professionals, both astronomers and scientists from other fields, have contributed to these findings, although few of them stand back and look at the bigger picture. It is hoped that many of them will add their comments and join as co-authors in an attempt to persuade the upcoming generation of astronomers that the present cosmology is headed nowhere, and to encourage them to join the search for better answers.

References
[[1]] T. Van Flandern (1997), MetaRes.Bull. 6, 64; <http://metaresearch.org>, “Cosmology” tab, “Cosmology” sub-tab.
[[2]] T. Van Flandern, “Did the universe have a beginning?”, Apeiron 2, 20-24 (1995); MetaRes.Bull. 3, 25-35 (1994); http://metaresearch.org, “Cosmology” tab, “Cosmology” sub-tab.
[[3]] T. Van Flandern (1999), Dark Matter, Missing Planets and New Comets, North Atlantic Books, Berkeley (2nd ed.).
[[4]] Sir Arthur Eddington (1926), “The temperature of space”, Internal constitution of the stars, Cambridge University Press, reprinted 1988, chapter 13.
[[5]] Regener (1933), Zeitschrift für Physik; confirmed by Nernst (1937).
[[6]] Finlay-Freundlich (1954).
[[7]] E.J. Lerner, (1990), “Radio absorption by the intergalactic medium”, Astrophys.J. 361, 63-68.
[[8]] T. Van Flandern, “Is the microwave radiation really from the big bang 'fireball'?”, Reflector (Astronomical League) XLV, 4 (1993); and MetaRes.Bull. 1, 17-21 (1992).
[[9]] (2002), Nature 415, vii & 27-29 & 54-57.
[[10]] (1997), Astrophys.J. 489, L119-L122.
[[11]] (2000), Science 290, 1257.
[[12]] (2000), Nature 405, 1009-1011 & 1025-1027.
[[13]] (2000), Science 290, 1257.
[[14]] (2002), Astrophys.J. 566, 252-260.
[[15]] (2001), Astrophys.J. 552, L1-L5.
[[16]] F. Hoyle, G. Burbidge, J.V. Narlikar (2000), A different approach to cosmology, Cambridge University Press, Cambridge, Chapter 9: “The origin of the light elements”.
[[17]] (2001), Science 291, 579-581.
[[18]] E.J. Lerner (1991), The Big Bang Never Happened, Random House, New York, pp. 23 & 28.
[[19]] T. Van Flandern (1992), “Quasars: near vs. far”, MetaRes.Bull. 1, 28-32; <http://metaresearch.org>, “Cosmology” tab, “Cosmology” sub-tab.
[[20]] H.C. Arp (1998), Seeing Red, Apeiron, Montreal.
[[21]] (2002), Astrophys.J. 566, 705-711.
[[22]] (1999), Nature 399, 539-541.
[[23]] (1999); Sky&Tel. 98 (Oct.), 20.
[[24]] D.S. Mathewson, V.L. Ford, & M. Buchhorn (1992), Astrophys.J. 389, L5-L8.
[[25]] D. Lindley (1992), Nature 356, 657.
[[26]] (1999), Astrophys.J. 512, L79-L82.
[[27]] (1993), Science 257, 1208-1210.
[[28]] (1996), Astrophys.J. 470, 49-55.
[[29]] T. Van Flandern (1996), “Possible new properties of gravity”, Astrophys.&SpaceSci. 244, 249-261; MetaRes.Bull. 5, 23-29 & 38-50; <http://metaresearch.org>, “Cosmology” tab, “Gravity” sub-tab.
[[30]] T. Van Flandern (2001), “Physics has its principles”, Redshift and Gravitation in a Relativistic Universe, K. Rudnicki, ed., Apeiron, Montreal, 95-108; MetaRes.Bull. 9, 1-9 (2000).
[[31]] (2001), Astron.J. 122, 2833-2849.
[[32]] (2001), Astron.J. 122, 2850-2857.
[[33]] (2002), Astrophys.J. 565, 50-62.
[[34]] (2002), <http://www.ifa.hawaii.edu/users/cowie/z6/z6_press.html>.
[[35]] (2000), Astrophys.J. 530, 17-35.
[[36]] (1999), Nature 398, 25-26.
[[37]] (2000), Science 290, 1923.
[[38]] (1999), Phys.Today Sept, 13, 15, 78.
[[39]] E.L. Wright (2000), <http://www.astro.ucla.edu/~wright/errors.html>.
[[40]] (2001), Astron.J. 122, 1084-1103.
[[41]] H. Kurki-Suonio (1990), Sci.News 137, 287.
[[42]] C. Seife (2000), “Fly’s Eye spies highs in cosmic rays’ demise”, Science 288, 1147.
[[43]] (2000), Sci.News 158, 86.
[[44]] (1997), Science 278, 226.
[[45]] (2000), Astrophys.J. 530, 1-16.
[[46]] (2002), <http://oposite.stsci.edu/pubinfo/PR/96/35/A.html>.
[[47]] (2000), Astrophys.J. 538, 1-10.
[[48]] B.R. Bligh (2000), The Big Bang Exploded!, <brbligh@hotmail.com>.
[[49]] (2000), Astrophys.J. 541, 519-526.
[[50]] (1999), Nature 397, 225.
[[51]] (1998), Seeing Red, H. Arp, Apeiron, Montreal, 124-125.
[[52]] McGaugh (2001), Astronomy 29#3, 24-26.
[[53]] (1992), Astrophys.J. 393, 59-67.
[[54]] Guthrie & Napier (1991), Mon.Not.Roy.Astr.Soc. 12/1 issue.
[[55]] (2001), Astron.J. 121, 21-30.
[[56]] (1999), Astron.&Astrophys. 343, 697-704.
[[57]] (2001), Astron.J. 121, 54-65.
[[58]] (2001), Astrophys.J. 557, 519-526.
[[59]] (2001), Phys.Rev.Lett. 9/03 issue.
[[60]] (2001), Astrophys.J. 558, L1-L4.
[[61]] (2002), Astrophys.J. 565, 50-62.
[[62]] (2002), Astrophys.J. 565, 63-77.
[[63]] (2002), Astrophys.J. 566, 51-67.
[[64]] (2002), Astrophys.J. 566, 68-92.
[[65]] (2002), Astrophys.J. 566, 103-122.
[[66]] (2002), Astrophys.J. 566, L1-L4.
[[67]] (1992), Nature 355, 55-58.