Monday 22 August 2011

Skydivephil: A Refutation, Part Two

Skydivephil, whose video on Kalam I have already refuted, has another video on the Fine-Tuning teleological argument: http://www.youtube.com/watch?v=rt-UIfkcgPY 

It managed to be even more egregiously bad, if you can imagine. This video is so bad that I just couldn’t pass up the opportunity to smash it to pieces.

Claim One: The Fine-Tuning argument argues that because the world seems perfect for life, it must be finely tuned.

Thirty seconds or so into the video and already we have our first straw man. It would probably help if you actually familiarised yourself with Craig’s works and arguments… since it seems painfully obvious that you have not. After all, you can hardly refute what somebody believes if you do not know what they actually believe. Simply assuming what other people believe… or worse, deliberately misrepresenting what they believe, is not only fallacious but dishonest.

Let’s see what William Lane Craig actually argues. I shall simply take his form of the argument from his book, Reasonable Faith, which a competent debater would already be familiar with.

“What is meant by “fine tuning?” The physical laws of nature, when given mathematical expression, contain various constants (such as the gravitational constant) whose values are not determined by the laws themselves; a universe governed by such laws might be characterised by any of a wide range of values for these constants. Take, for example, a simple law like Newton’s law of gravity F = Gm₁m₂/r². According to this law, the gravitational force F between two objects depends not just on their respective masses m₁ and m₂ and the distance between them r, but also on a certain quantity G which is constant regardless of the masses and distance. The law doesn’t determine what value G actually has. In addition to these constants, moreover, there are certain arbitrary physical quantities, such as the entropy level, which are simply put into the universe as boundary conditions on which the laws of nature operate. They are also independent of the laws. By “fine-tuning” one means that small deviations from the actual values of the constants and quantities in question would render the universe life-prohibiting or, alternatively, that the range of life-permitting values is exquisitely narrow in comparison with the range of assumable values.” – William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p158

Craig goes on to give examples. The universe is conditioned principally by the fine structure constant (or electromagnetic interaction), gravitation, the weak force, the strong force, and the ratio between the mass of a proton and the mass of an electron. Slight variations in some of these values would prevent a life-permitting universe. If either gravitation or the weak force had been different by even one part in 10^100, then a life-permitting universe would have been prevented.

There are also two other parameters governing the expansion of the universe, one relating to the density of the universe and one relating to the speed of that expansion. If the expansion of the universe had been slower or faster by even one part in a hundred thousand million, million, million, then the universe would either have collapsed back in on itself or expanded too quickly, preventing galaxies from forming. Other constants include the cosmological constant and the entropy per baryon in the universe, both of which are extraordinarily fine-tuned.

So, when we say “fine-tuned” we mean that the constants and quantities are just right for the existence of intelligent life.
And when we say “life” we mean organisms with the ability to take in food, extract energy from it, grow, adapt to their environment, and reproduce, whatever form such organisms might take.

“In order for the universe to permit life so defined, the constants and quantities have to be incomprehensibly fine-tuned. In the absence of fine-tuning, not even atomic matter or chemistry would exist, not to speak of planets where life might evolve.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p159

Fine-tuning falls into three categories:
i) The fine-tuning of the laws of nature.
ii) The fine-tuning of the constants of nature.
iii) The fine-tuning of the initial conditions of the universe.

Therefore, the argument can be formulated as:
1) The fine-tuning of the universe is due to either physical necessity, chance, or design.
2) It is not due to physical necessity or chance.
3) Therefore, it is due to design.

It would help if you at least got the argument right.

Claim Two: We observe that the universe is finely tuned for life, because we are alive.

This is essentially the classic Anthropic Argument. Allow me to quote none other than Dr. Craig once more:
“The argument is, however, based on confusion. Barrow and Tipler confuse the true claim
A.     If observers who have evolved within a universe observe its constants and quantities, it is highly probable that they will observe them to be fine-tuned for their existence.
With the false claim:
A’. It is highly probable that a universe exist which is finely tuned for the evolution of observers within it.
An observer who has evolved within the universe should regard it as highly probable that he will find the constants and quantities of the universe fine-tuned for his existence; but he should not infer that it is therefore highly probable that such a fine-tuned universe exist.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p165

Craig goes on to give the following example:
“You are dragged before a firing squad of one hundred trained marksmen to be executed. The command is given: “Ready! Aim! Fire!” You hear the deafening roar of the guns. And then you observe that you’re still alive, that the one hundred marksmen missed! Now what do you conclude? “I really shouldn’t be surprised at the improbability of their all missing because if they hadn’t all missed, then I wouldn’t be here to be surprised about it. Since I am here, there’s nothing to explain!” Of course not! While it’s correct that you shouldn’t be surprised that you don’t observe that you are dead (since if you were dead, you couldn’t observe the fact), nevertheless, it doesn’t follow that you shouldn’t be surprised that you do observe the fact that you are alive. In view of the enormous improbability of all the marksmen’s missing, you ought to be very surprised that you observe that you are alive and so suspect that more than chance alone is involved, even though you’re not surprised that you don’t observe that you are dead.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p165-166

Again, if you were familiar with Craig and his arguments, then you would already have known that Craig is aware of this “response” and has already replied to it.

Claim Three: The universe is actually largely hostile and devoid of life. Life only inhabits a small part of the universe.

And? Oh, I’m sorry, you seem to be suffering from the delusion that this counts as an argument against fine-tuning. How does this in any way change the fact that, in order for intelligent observers to exist, the constants and quantities of nature need to be extraordinarily fine-tuned? If anything, it serves to support design: in a universe that is overwhelmingly hostile, we nonetheless find life. That, however, is a separate argument. In order for life to be even capable of forming, the universe needs to be fine-tuned. THAT is the current argument.

Claim Four: We don’t know the available range.

Even if it were the case that the range of possible universes were very narrow, we would still be presented with many variables requiring fine-tuning. Moreover, in the absence of any physical reason to think that these values are constrained, we are justified in adopting a principle of indifference and assuming that the probability of our universe’s existing is the same as the probability of any other universe’s existing.

You blather on about the shuffling of cards, where the initial probability is 1 in 52, and this probability then supposedly increases with the shuffling of the cards and so on. I find it curious that you seem to think that shuffling the cards has any effect on the likelihood of drawing a particular card. When we put the card back and shuffle, there still remains exactly the same number of cards. The only way the probability would become lower is if we consider drawing the same card several times in a row. There is a 1 in 52 chance of drawing a specific card, but the chance of picking the same card many times in a row is much lower. I would be very interested in examples of people getting the same card multiple times in a row, or rolling the same number on a die multiple times in a row.
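
To put some numbers on this (my own back-of-the-envelope arithmetic, not anything from the video), here is a quick sketch showing that the single-draw probability stays at 1 in 52 no matter how much you shuffle, whilst the probability of repeating the same specific card k times in a row shrinks geometrically:

from fractions import Fraction

# Probability of drawing one specific card from a well-shuffled 52-card deck.
single_draw = Fraction(1, 52)

# Drawing with replacement and reshuffling: each draw is independent,
# so shuffling never changes the single-draw probability...
print("any single draw:", single_draw)                      # 1/52

# ...but naming a specific card and drawing it k times in a row multiplies the odds.
for k in (2, 3, 5):
    print(f"same specific card {k} times in a row:", single_draw ** k)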

Of course, this does nothing to circumvent the fact that the probability of the constants of nature having the values that they do is extraordinarily low. Suppose that we take 10^17 silver dollars and lay them on the face of Texas; they will cover all of the state two feet deep. Now mark one of these silver dollars and stir the whole mass thoroughly. Blindfold a man and tell him he must pick up one silver dollar. What chance would he have of getting the right one? Or suppose we placed a single white ball in a sea of a billion, billion, billion black balls and had someone draw a single ball at random: what would be the probability of drawing the white one?

More specifically, gravitation and the weak force are both fine-tuned to one part in 10^100, the density of the universe is fine-tuned to one part in 10^60, the cosmological constant is fine-tuned to one part in 10^120, and the entropy per baryon in the universe is fine-tuned to one part in 10^(10^123). Oh, and the expansion rate of the universe is fine-tuned to one part in a hundred thousand million, million, million. Of course, you think such odds are easy to overcome. Imagine, then, 10^60 bullets, one of which is a blank. You are told to select one, put it into a gun, hand the gun to an expert marksman, and then have them shoot you. I assume that you would have no problem with such odds.
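
To give a feel for how large these numbers are (again, my own arithmetic rather than anything from Craig), one part in 10^60 corresponds to calling roughly 200 consecutive fair coin flips correctly, and one part in 10^120 to roughly 400:

import math

# How many consecutive fair coin flips have the same probability as 1 in 10^N?
# Solve 2^k = 10^N, i.e. k = N * log2(10).
for exponent in (60, 100, 120):
    flips = exponent * math.log2(10)
    print(f"1 in 10^{exponent} is about {flips:.0f} coin flips in a row")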

Claim Five: The design hypothesis is an argument from ignorance, akin to saying “a magic man done it.”

Sorry, chump, but an appeal to ridicule does not make a valid argument. Again, if you had actually read Craig’s work, as opposed to pretending that you have, then you would know that the first premise of Craig’s fine-tuning teleological argument is: the fine-tuning of the universe is due to either physical necessity, chance, or design.

He then argues for the truth of premise 2, and for the design conclusion, by means of an inference to the best explanation. In what way is this an “argument from ignorance?”

In order to falsify the design hypothesis, you need to show that either chance or physical necessity is a more plausible option. Yet all we are met with is your incessant whining.

Claim Six: Fine-tuning really refers to circumstances when the parameters of a model must be adjusted very precisely in order to agree with observations. Scientific explanations of these values are therefore possible.

As if I needed further evidence that you don’t know anything about the subject whatsoever.

First of all, you conveniently don’t read out the following in the article of Craig’s you quote from:
“Most importantly, inflationary models require the same fine-tuning which some theorists had hoped to eliminate via such models.”

You seem to have this nasty habit of only reading out the parts you want to… all while complaining that theists are the ones who quote mine. Ah, the irony.

Indeed, in order to produce such an enormous inflationary rate of expansion, and to result in the necessary values for our universe’s critical density, inflationary models rely upon two or more parameters taking on particularly precise values. So precise are these values that the problem of fine-tuning remains; it is merely pushed one step back.

Secondly, this article was written in 1996, whereas in the 3rd edition of Reasonable Faith, which was published in 2008, Dr. Craig says:
“Observations indicate that at 10^-43 second after the Big Bang the universe was expanding at a fantastically special rate of speed with a total density close to the critical value on the borderline between recollapse and everlasting expansion.” – William Lane Craig, Reasonable Faith, 3rd Edition, Crossway, (2008), p158

He then notes the two values such inflation requires to be fine-tuned. You might want to get a little more up-to-date.

Thirdly, this only accounts for certain values, not all of them, so we are still presented with constants and quantities whose finely tuned values require an explanation.

What, then, of the multiverse that you appeal to? Keen theoreticians talk about the multiverse as if it were a really existing thing that can be inferred from the evidence, when in reality the only reason the multiverse is discussed at all is because:
a) certain models require there to be a multiverse just to make them work (thus violating Occam’s razor by needlessly multiplying entities beyond necessity), and
b) it multiplies the available probabilistic resources so as to reduce the improbability of the occurrence of fine-tuning.

Let’s quote some Paul Davies:
"The general multiverse explanation is simply naive deism dressed up in scientific language. Both appear to be an infinite unknown, invisible and unknowable system. Both require an infinite amount of information to be discarded just to explain the (finite) universe we observe." - Paul Davies, from Bernard Carr, ed., Universe or Multiverse?, (Cambridge: Cambridge University Press, 2007), p495

George Ellis:
“What is new is the assertion that the multiverse is a scientific theory, with all that implies about being mathematically rigorous and experimentally testable. I am skeptical about this claim. I do not believe the existence of those other universes has been proved—or ever could be. Proponents of the multiverse, as well as greatly enlarging our conception of physical reality, are implicitly redefining what is meant by “science.” – George F. R. Ellis, Does the Multiverse Really Exist?, Scientific American Magazine, July 19th 2011, http://www.scientificamerican.com/article.cfm?id=does-the-multiverse-really-exist (Accessed August 21st 2011)

However, let’s take the multiverse seriously for a second. In what way does this mitigate the problem of fine-tuning? There are, actually, two different types of multiverse. The first is an unrestricted view on which every possible world does, in fact, exist. A similar view, proffered by Max Tegmark, is that everything that exists mathematically exists physically. The second type is more restricted, in that a multitude of universes are generated by some kind of physical process or “multiverse generator.”

There are, of course, arguments why such an unrestricted multiverse is implausible. First of all, you have needlessly invoked the existence of all possible universes just in order to explain the existence of our own. So now we have not just one, but many universes that require explaining, and we still haven’t gotten round to explaining why our own universe exists.

Secondly, consider the following scenario: rolling a six-sided die 100 times, and having it come up on six each time. This is an improbable scenario, for which we would normally demand an explanation. We shall refer to this scenario as DTx, with D representing the particular die and Tx representing the sequence “coming up 100 times on six.” Normally, we would never accept the proposition that scenario DTx simply occurred by chance; it would require an explanation. The reason for this is not just that it is improbable, but that it also conforms to an independently given pattern. To borrow a phrase from William Dembski, this is a case of “specified complexity.”
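
Just to make the improbability concrete (my own arithmetic, not Dembski’s), the chance of a fair die landing on six 100 times in a row is (1/6)^100, which works out at roughly 1 in 10^78:

from fractions import Fraction
import math

p = Fraction(1, 6) ** 100      # probability of 100 sixes in a row on a fair die
print(float(p))                 # roughly 1.5e-78
print(100 * math.log10(6))      # roughly 77.8, so p is about 1 in 10^78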

Now, for any possible state of affairs S, such as DTx, the unrestricted multiverse implies that this state of affairs S is actual. With regard to explanation, for all possible states of affairs S, advocates of an unrestricted multiverse must claim that the fact that the unrestricted multiverse entails S either (i.a) undercuts all need for explanation, or (i.b) does not. Further, with regard to some state of affairs S (such as DTx) that we normally and uncontroversially regard as improbable, they must claim that the fact that the unrestricted multiverse entails S either (ii.a) undercuts the improbability of S (since it entails S), or (ii.b) does not.

Both (i.a) and (ii.a) would constitute a reductio ad absurdum of the unrestricted multiverse, since they would undercut all justification in science based on explanatory merit, as well as ordinary claims of probability, such as in the die example. Whereas if advocates of the unrestricted multiverse opt for (i.b) and (ii.b), then the mere fact that the unrestricted multiverse entails a life-permitting universe neither undercuts the need to explain the life-permitting universe nor its overwhelming improbability. Thus the unrestricted multiverse is a self-defeating hypothesis.

Thirdly, you earlier ridiculed appeals to the design hypothesis as tantamount to suggesting “a magic man done it,” yet how is appealing to an unrestricted multiverse any better, even if we grant the baseless supposition that inferring design is identical to appealing to magic? I would think that the unrestricted multiverse hypothesis is even worse than appealing to a “magic man.”

Lastly, and ironically enough, the unrestricted multiverse hypothesis implies that God exists, since the unrestricted multiverse hypothesis is the view that every possible world exists:
1) It is possible that a maximally great being exists.
2) If it is possible that a maximally great being exists, then a maximally great being exists in some possible world.
3) If a maximally great being exists in some possible world, then it exists in every possible world.
4) If a maximally great being exists in every possible world, then it exists in the actual world.
5) If a maximally great being exists in the actual world, then a maximally great being exists.
6) Therefore, a maximally great being exists.

God is either necessary or impossible, and since the concept of a maximally great being is intuitively, and a posteriori, a coherent notion, it follows that He exists given the unrestricted multiverse hypothesis. Of course, there are reasons for concluding that God exists based on ontological arguments that do not rely on such an extravagant unrestricted multiverse hypothesis; indeed, even this version of the ontological argument does not rely on there being an unrestricted multiverse, or even a multiverse at all.
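
For those who like it compressed, the modal core of the argument can be sketched as follows (my own shorthand, where G(x) abbreviates “x is maximally great” and maximal greatness is defined so as to include existence in every possible world):

\[
\Diamond\,\exists x\,G(x) \;\Rightarrow\; \Diamond\,\Box\,\exists x\,G(x) \quad \text{(by the definition of maximal greatness)}
\]
\[
\Diamond\,\Box\,\exists x\,G(x) \;\Rightarrow\; \Box\,\exists x\,G(x) \quad \text{(the S5 axiom } \Diamond\Box p \rightarrow \Box p\text{)}
\]
\[
\Box\,\exists x\,G(x) \;\Rightarrow\; \exists x\,G(x) \quad \text{(axiom T: } \Box p \rightarrow p\text{)}
\]

The only substantive premise is the first possibility claim; the rest is standard modal machinery.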

However, what of a more restricted version of the multiverse hypothesis? That is to say, there is a physical “multiverse generator” either via inflation or super-string cosmology and so on. Such an appeal, whilst more realistic, is even more problematic. First of all, there needs to be a plausible mechanism for the generation of universes within the multiverse. Let’s consider the inflationary multiverse. In order to explain the fine-tuning of our life-permitting universe, it would need to hypothesise one or more mechanisms or laws that will do the following four things:
i) cause the expansion of a small region of space into a very large region.
ii) generate the very large amount of mass-energy needed for that region to contain matter instead of merely empty space.
iii) convert the mass-energy of inflated space to the sort of mass-energy we find in our universe.
iv) cause sufficient variations amongst the constants of physics to explain their fine-tuning.

Conditions (i) and (ii) are met by two factors:
a) the postulated inflaton field, which gives the vacuum a positive energy density, and
b) the fact that, according to General Relativity, space expands at an enormous rate in the presence of a large, near-homogeneous positive energy density.

Without either factor, there would neither be regions of space that inflate, nor would those regions have the mass-energy necessary for a universe to exist. Condition (iii) is met by a combination of Einstein’s equivalence of mass and energy, E = mc², and the assumption that there is a coupling between the inflaton field and the matter fields. Finally, condition (iv) is achieved by combining inflationary cosmology with superstring/M-Theory, which allows for some 10^500 possible worlds. We thus see that, for the four conditions to be met, the multiverse generator requires all of these factors.

There is also a fifth condition, (v), which is, however, not met: the laws governing the “multiverse generator,” whether of the inflationary type or some other, must be just right in order to produce life-permitting universes, rather than just dead ones. Even though certain physical laws can vary in superstring/M-Theory, there are certain fundamental laws and principles underlying superstring/M-Theory that therefore cannot be explained as part of a multiverse selection effect. For example, without the principle of quantisation, all electrons would be sucked into the atomic nuclei, and hence atoms would be impossible. Without the Pauli exclusion principle, electrons would all occupy the lowest atomic orbit, and thus complex and varied atoms would be impossible. Without a universally attractive force between all masses, such as gravity, matter would not be able to form sufficiently large material bodies (such as planets) for life to develop, or long-lived energy sources such as stars to exist. We thus see that the multiverse simply pushes the question back, since it requires fine-tuning itself.

Thirdly, the inflationary multiverse runs into a major problem in explaining the low entropy of the universe. The inflationary multiverse postulates that our universe exists in a true vacuum state with an energy density that is nearly zero, whereas earlier it existed in a false vacuum state with a very high energy density. The false vacuum state is expanding so rapidly that, as it decays, the resulting bubbles of true vacuum, or “bubble universes,” though themselves expanding, are unable to keep up with the expansion of the false vacuum. Each bubble is then subdivided into domains bounded by event horizons, each domain constituting an observable universe.

This is essentially a more grandiose version of Ludwig Boltzmann’s hypothesis. Among the many worlds generated by inflation, there will be some worlds that are in a state of thermodynamic disequilibrium, and only such worlds can support observers. It is therefore not surprising  that we find ourselves in a world in a state of disequilibrium, since that is the only kind of world that we could observe. Of course, the same problems that were levelled against Boltzmann’s hypothesis can be levelled against the inflationary multiverse.


In a multiverse of eternally inflating vacua, most of the volume will be occupied by high entropy, disordered states incapable of supporting observers. There are thus only two ways in which observable states can exist:
1) by being part of a relatively young, low-entropy world; or
2) by being a thermal fluctuation in a high-entropy world.

The objection, then, is that it is overwhelmingly more probable that a much smaller region of disequilibrium should arise than one as large as our observable universe. Thus, in the multiverse of worlds, observable states involving such an initial low-entropy condition will be an incomprehensibly tiny fraction of all the observable states there are. If we are just one random member of an ensemble of worlds, then we should be observing a much smaller world. It would be overwhelmingly more probable that there really isn’t a vast, orderly universe out there, despite our observations; it’s all an illusion! Indeed, the most probable state is an even smaller universe consisting of a single brain that appears out of the disorder via a thermal fluctuation. Thus, in all probability, you alone exist, and everything you observe around you, including your physical body, is an illusion. This is a bizarre paradox known as the “invasion of the Boltzmann brains.”

Thus appealing to the multiverse to explain the fine-tuning totally backfires, as we are required to assume even more statistical improbability and fine-tuning just to make it work. God, on the other hand, not being composed of any material parts, is simple, and much less complicated than an infinite multiverse.

Claim Seven: The multiverse is testable, and there is evidence that bubble universes have collided with ours.

Oh really? In Feeney et al.’s paper (http://arxiv.org/abs/1012.1995), they outline a particular method of analysing the CMB temperature fluctuations. Projected onto the 2-dimensional surface of last scattering, the leftover signal of a bubble collision would have azimuthal symmetry. They assume that a bubble collision has left a mark in the CMB consisting of a slightly different temperature in such an azimuthal patch. They use an algorithm that analyses the CMB temperature fluctuations in three stages. First, it searches for areas with such azimuthal symmetry; secondly, it searches for edges where the temperature makes a slight step; lastly, if such an edge is found, it looks for the best parameters to reproduce the findings. They first use fake CMB data to test their algorithm. This stage of the simulation is represented by the following skymap.

Each quadrant of this skymap shows the same area, just mirrored vertically, horizontally, and diagonally. The upper-left quadrant shows the patch with the temperature variation from the bubble collision without fluctuations superimposed, and the upper-right quadrant adds random fluctuations. The lower-left quadrant shows the result of looking for patches of azimuthal symmetry, and the lower-right quadrant shows the result of looking for edges with temperature steps. When they analysed the actual data, however, their algorithm found azimuthal symmetry but did not find edges, and azimuthally symmetric temperature modulations are not unique to bubble collisions. Feeney et al.’s results are nothing more than evidence that there are some features in the CMB. Thus, this is not evidence of an inflationary multiverse by a long shot.
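
To make the three-stage logic a little more tangible, here is a deliberately crude toy in Python (my own illustration; it is emphatically not Feeney et al.’s code or algorithm, which works on the sphere with a full Bayesian fit). It plants a disc-shaped temperature step in a noisy flat map and then recovers the disc’s centre and radius by brute-force matching:

import numpy as np

# Toy flat-sky "CMB" map: Gaussian noise plus a disc-shaped temperature step,
# standing in for the azimuthally symmetric imprint of a bubble collision.
rng = np.random.default_rng(0)
n = 200
noise = rng.normal(0.0, 0.2, (n, n))
yy, xx = np.mgrid[0:n, 0:n]

true_centre, true_radius, step = (120, 80), 30, 0.5
signal = (xx - true_centre[0]) ** 2 + (yy - true_centre[1]) ** 2 <= true_radius ** 2
sky = noise + step * signal

def contrast(cx, cy, r):
    """Mean temperature inside a trial disc minus the mean outside it."""
    d = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return sky[d].mean() - sky[~d].mean()

# Brute-force scan over candidate centres and radii; the best-matching disc
# plays the role of "the best parameters to reproduce the findings".
best = max(((cx, cy, r) for cx in range(20, n, 20)
                        for cy in range(20, n, 20)
                        for r in (20, 30, 40)),
           key=lambda params: contrast(*params))
print("recovered centre and radius:", best)   # should land on (120, 80, 30)

The real pipeline is far more sophisticated, but the toy captures the basic idea of scanning for a symmetric patch with a small temperature offset and then fitting its parameters.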

One particular question that casts doubt over whether or not this is really evidence of an inflationary multiverse is: if our bubble was subject to a collision, why was it just gentle enough to reveal itself in these CMB findings, rather than wiping us out? Their paper starts from the assumption that the signal of a bubble collision is of such a particular sort that it merely results in a small temperature difference. How is it that eternal inflation would result only in such a signal that is just barely observable rather than something more catastrophic? Of course, all of this is moot: whether or not a multiverse really exists, it is simply insufficient to explain fine-tuning, as has already been demonstrated.

Claim Eight: The constants of physics change in various locations across the universe.

Even if this is the case, it does absolutely nothing to undermine the fine-tuning argument whatsoever. The argument is not that the constants are unable to change, but that such constants need to take particular values in order for life to evolve, and we still find ourselves in a “Goldilocks zone.” Instead of the whole universe being fine-tuned for life, humanity finds itself in a corner of space where the values of the fundamental constants happen to be just right for it. You would need to show that life has formed where these values are different. Simply showing areas where a constant is different is not enough, as that is fully consistent with the fine-tuning hypothesis. In fact, if the laws, constants, etc. were different elsewhere in the universe, then that would explain why the universe is so hostile and why the life-permitting region of the universe is so small.

There are, however, even more reasons why this isn’t a problem for the fine-tuning argument. First of all, it does nothing to falsify the contention that the initial conditions of the universe needed to be just right in order for life to form. Second, it does nothing to falsify the contention that these values needed to be just right in order for life to form in the first place. Let’s say that in order for planets to form, constant X needs to have value Y, and that in order for planets to stay together, constant X needs to remain within the range permitted by value Z. Once planets have formed, the value can change by any amount so long as it does not fall outside the range permitted by Z. Lastly, you would need to show that every example of fine-tuning is wrong, yet you have not even shown how the example of the fine-structure constant is wrong. The fact that alpha can vary does not have any bearing on the claim that alpha needs to have a particular value in order for life to form, and even showing how this one example is wrong would not automatically show how the other examples are wrong either.

You appeal to cyclic models and say that they allow for varying values of the laws of physics; since they allow for an eternally cycling universe, every so often a universe such as ours will appear in the cycle. Of course, there are problems with such cyclic models truly being eternal. I already addressed Loop Quantum Gravity, the Aguirre-Gratton model, the Gott-Li CTC model and the Baum-Frampton model in my critique of your video on the Kalam Cosmological argument, but I shall briefly mention them here. Bojowald is open to the possibility of an irreversible rise in entropy as a function of time, so the fact that entropy rises, cycle by cycle, and would trip up a proposed past-infinite cyclic model does not count against the viability of the loop quantum approach as a candidate for quantum gravity. Secondly, dark energy prevents there from being a truly cyclic universe, whether it takes the form of a cosmological constant or of quintessence. The Baum-Frampton model itself requires fine-tuning in order to work, not to mention the other problems with the viability of the model.

The Aguirre-Gratton model denies any evolutionary continuity between the region that is topologically prior to t and our universe. The other side of the de Sitter space is not our past, for the moments of that time are not earlier than t or any of the moments later than t in our universe. There is no connection or temporal relation whatsoever of our universe to that other reality. Efforts to deconstruct time thus fundamentally reject the evolutionary paradigm. As for the CTC model, it has unstable properties that prevent a CTC from being physically viable. Gott and Li’s solution to this, however, requires… you guessed it, fine-tuning in order to make it work. Thus, these models either do not establish an eternal universe, have internal problems that prevent them from being a viable option, or require fine-tuning and so would only serve to push the question back further.

I shall now address the cyclic models I did not address in my critique. I did briefly mention Roger Penrose’s Conformal Cyclic Cosmology, but did not go into detail, so I shall do that here. The first problem is that the evidence that Penrose and Gurzadyan claimed supported the CCC model doesn’t. Penrose and Gurzadyan claimed to have found concentric low-variance circles at high statistical significance in the WMAP temperature skymaps. However, two independent studies were both unable to reproduce these results (see http://arxiv.org/abs/1012.1268 and http://arxiv.org/abs/1012.1305).

Penrose’s solution to the entropy problem is to suggest that the initial singularity is the same thing as the open-ended de Sitter-like expansion that our universe seems about to experience. According to Penrose, physically, in the very remote future, the universe “forgets” time, in the sense that there is no way to build a clock with just conformally invariant material. This is related to the fact that massless particles, in relativity theory, do not experience any passage of time. With conformal invariance both in the remote future and at the Big Bang origin, he argues that the two situations could be physically identical, so that the remote future of one phase of the universe becomes the Big Bang of the next. However, for this scenario to work, all massive fermions and massive, charged particles must disappear into radiation, including, for example, free electrons. There is currently just no justification for positing this.

Penrose’s CCC is also based on the Weyl Curvature Hypothesis and Paul Tod’s implementation of this idea within the general theory of relativity. Weyl curvature is a kind of curvature whose effect on matter is of a distorting or tidal nature, rather than the volume-reducing one of material sources. Penrose then suggests that the Weyl curvature is constrained to be zero, or at least very small, at the initial singularity of the actual physical universe. This mathematical technique is necessary to stitch a singularity to a maximally extended de Sitter expansion. Tod’s formulation of the Weyl Curvature Hypothesis is the hypothesis that a past-spacelike hypersurface boundary can be adjoined to a space-time in which the conformal geometry can be mathematically extended smoothly through it to the past side of this boundary. Penrose’s “outrageous proposal,” as he refers to it, is to suggest that we take this mathematical fiction seriously as something physically real.

The failing of this approach is the supposed correspondence between Weyl curvature and entropy. That correspondence seems clear enough when one is considering the structure of the initial Big Bang singularity, given its vanishingly small entropy. But while the de Sitter-like end state of the universe also minimises Weyl curvature, its entropy is maximised. Like black holes obeying the Hawking-Bekenstein entropy law, de Sitter space has a cosmological horizon with entropy proportional to its area, and it is generally believed that this state represents the maximum entropy that can fit within the horizon. Penrose regards the entropy of the cosmological horizon as spurious and invokes non-unitary loss of information in black holes in order to equalise the small entropy at the boundary. Penrose attributes the large entropy at late cosmic times to degrees of freedom internal to the black holes and suggests that in CCC the universe’s entropy is renormalised, so that we can discount the entropy contribution from the horizon once all black holes have evaporated.

However, the entropy of the cosmological horizon must have physical meaning; otherwise the physics of black hole decay, upon which Penrose’s scenario depends, would not work properly. Furthermore, black hole decay is actually a dynamic process that is the sum of the energy lost by Hawking radiation plus the energy gained by absorption of local matter created by thermal fluctuations due to the de Sitter Gibbons-Hawking temperature. These thermal fluctuations therefore suggest that the entropy of the cosmological horizon is a real physical quantity. It therefore seems unwarranted to embrace Penrose’s position, and very few physicists have been persuaded by Penrose’s non-unitary brand of quantum physics. Whilst the Weyl curvature is the same between the two states that Penrose wishes to say are identical, the entropy is not, and therefore the two states cannot be identical. Thus, CCC does not avert an absolute beginning, and if it cannot be eternal, then there can only be a limited number of cycles, and so it does not explain fine-tuning.

The Veneziano-Gasperini pre-Big Bang inflation model suffers in that it fails to overcome the entropy problem. Furthermore, all the pre-Big Bang black holes should have coalesced into a massive black hole co-extensive with the universe by now, given infinite past time. The Steinhardt-Turok ekpyrotic cyclic model, meanwhile, is subject to the Borde-Guth-Vilenkin theorem, and so has a beginning. There are also physical problems with such a model, as outlined here: http://arxiv.org/abs/hep-th/0202017

Claim Nine: Other forms of life could evolve given different laws/constants.

Sorry, but that just isn’t the case with the examples given. By life, as I have already explained, we mean organisms with the ability to take in food, extract energy from it, grow, adapt to their environment, and reproduce, whatever form such organisms might take. One can wonder how such life forms would be able to evolve in a universe with no stars or planets, or in a universe with no chemicals or atomic matter. This only goes to show how vastly unfamiliar with the fine-tuning argument you are. Furthermore, weren’t you the one who earlier claimed that the universe is vastly hostile to life? Obviously not as hostile as you would have had us believe earlier.

Claim Ten: Many theists believe that life was created via a miracle.

And? There are atheists who believe in directed panspermia: that life on Earth, and even our universe as a whole, was created by advanced aliens. This is simply irrelevant to the fine-tuning argument. I and other serious-minded theists have no problem with evolution and abiogenesis.

Claim Eleven: God could have made life possible in any universe.

God could have made a universe where Richard Dawkins was the Pope and William Lane Craig was head of the British Humanist Association. What’s your point? Are you suggesting that it’s impossible for an omnipotent being to create the universe and all life within it in the manner in which our universe was created? You’re also assuming that God is omniderigent, i.e., all-causing, which would undermine creaturely freedom. Not only is your argument irrelevant, it is also invalid. Just because God has the power to pluck every hair from your head, it does not follow that God has to exercise that power. Here’s a newsflash: not all Christians are Calvinists. I would try explaining Molinism to you, but I assume that it would be lost on you, given that you apparently have a hard time understanding what the fine-tuning argument is even arguing.


This video of yours managed to be even more dismal, contrived, convoluted and mind-numbingly stupid than your video on the Kalam cosmological argument. I think it is clear that you have no idea what you are talking about and haven’t even so much as bothered actually reading the arguments you are trying to refute. Then again, actually reading what Craig, Robbins, et al. have to say on fine-tuning would require you to read and think for yourself… Or perhaps you HAVE read their arguments and simply chose to blatantly misrepresent them and to mollify them with non-answers and non-sequiturs. It amazes me that you are truly deluded enough to believe you made so much as a scratch in the fine-tuning argument. I kind of felt guilty after a while, as it was so easy tearing your video to shreds that I was literally giggling with glee.

Friday 19 August 2011

Skydivephil: A Refutation, Part One

So, there is this video: http://www.youtube.com/watch?v=baZUCc5m8sE

It claims to be a decisive refutation of the Kalam Cosmological argument. However, it contains many, many glaring errors and omissions. I shall expose them here.

Claim One: The account in Genesis looks nothing like the account given by cosmology.

This is, of course, patently false. Whilst the linguistic nuances of Genesis would take a while to explain, suffice it to say that Genesis does NOT teach that everything was poofed into being across the time span of one week, 6,000 years ago.

This is actually a modern view of Genesis that was first formulated… in the 1600s by one James Ussher. Before this, practically nobody held to a young earth, and the few that did, did not base their opinions and arguments on the Bible. For example, St. Augustine, who believed that the world was younger than the estimations of the pagan Greeks, Egyptians, et al., was adamant that such views were not based on the Biblical texts, as he writes in De Genesi ad Litteram Libri Duodecim (On the Literal Meaning of Genesis):

Usually, even a non-Christian knows something about the earth, the heavens, and the other elements of this world… and this knowledge he holds as being certain from reason and experience. Now, it is a disgraceful and dangerous thing for an infidel to hear a Christian, presumably giving the meaning of Holy Scripture, talking nonsense on these topics, and we should take all means to prevent such an embarrassing situation, in which people show up vast ignorance in a Christian and laugh it to scorn. The shame is not so much that an ignorant individual is derided, but that people outside the household of the Faith think our sacred writers held such opinions, and, to the great loss of those for whose salvation we toil, the writers of our Scripture are criticised and rejected as unlearned men. If they find a Christian mistaken in a field which they themselves know well and hear him maintaining his foolish opinions about our books, how are they going to believe those books in matters concerning the resurrection of the dead, the hope of eternal life, and the kingdom of Heaven, when they think their pages are full of falsehoods on facts which they themselves have learnt from experience and the light of reason? Reckless and incompetent expounders of Holy Scripture bring untold trouble and sorrow on their wiser brethren when they are caught in one of their mischievous false opinions and are taken to task by those who are not bound by the authority of our sacred books. For then, to defend their utterly foolish and obviously untrue statements, they will call upon Holy Scripture for proof and even recite from memory many passages which they think support their position, although they understand neither what they say nor the things about which they make assertions.

He goes on to say:
It is also frequently asked what our belief must be about the form and shape of heaven, according to Sacred Scripture. Many scholars engage in lengthy discussions on these matters, but the sacred writers with their deeper wisdom have omitted them. Such subjects are of no profit for those who seek beatitude. And what is worse, they take up very precious time that ought to be given to what is spiritually beneficial. What concern is it of mine whether heaven is like a sphere and Earth is enclosed by it and suspended in the middle of the universe, or whether heaven is like a disk and the Earth is above it and hovering to one side?

And:
One does not read in the Gospel that the Lord said: ‘I will send you the Paraclete who will teach you about the course of the sun and moon.’ For he willed to make them Christians, and not mathematicians.

However, back again to Genesis and modern science. Scientist, and former agnostic, Andrew Parker, in his book The Genesis Enigma, has outlined several areas where the creation account of Genesis is, rather surprisingly, extremely accurate. For example, on the first “day” God creates the “heavens and the earth” but “without form” and commands “let there be light.” A perfect description of the big bang, some 13.75 billion years ago: an unimaginable explosion of pure energy and matter “without form” out of nothing, the primordial Biblical void. Parker, like myself, rejects Creationism and Intelligent Design. I recommend reading his book in its entirety, rather than just, you know, sticking your fingers in your ears and ignoring what the man has to say. The Ancient Israelites simply lacked the language to describe things in a way modern English speakers would immediately understand and recognise. Ancient languages were much simpler and had far fewer words than modern English, and so certain words could have different meanings depending on the context. For example, “day” could mean a 24-hour day… but in other cases it meant an indefinite period. So, we can hardly fault the Biblical writers for not writing a science textbook.

If we are to dismiss the Genesis account for using “unscientific” language, then that would mean we would no longer be able to use terms such as “sunrise” and “sunset,” since the sun does not actually rise or set. Thus your claim is incredibly naïve and indicative that you haven’t spent any amount of time critically examining the Bible. As for the Biblical writers themselves, they weren’t concerned with writing modern cosmology, and there isn’t any reason to suppose that they should have been. As scholar John H. Walton notes, ancient cosmology was function-oriented, and their views of how the cosmos worked were metaphysical rather than physical. For more I recommend: The Lost World of Genesis One by John H. Walton, Ancient Near Eastern Thought and the Old Testament by John H. Walton, and The Bible Among the Myths by John N. Oswalt. I also recommend K. A. Kitchen’s discussion of Genesis in his work On the Reliability of the Old Testament. Kitchen, despite being a Biblical maximalist, does not believe that Genesis is a literal historical account, but rather that it functions as what he calls “proto-history.”

Claim Two: Kalam contradicts itself because it posits that actual infinites cannot exist whilst endorsing a singularity, which is infinite.

This objection commits the fallacy of equivocation in that it confuses actual infinities and potential infinities. An actual infinity is a collection of definite and discrete members whose number is greater than any natural number. A potential infinity, by contrast, is a collection that approaches infinity as a limit but never gets there; such a collection is really indefinite, rather than infinite. Let’s consider Zeno’s paradoxes. Before Achilles can get to the finish line, he has to reach halfway, and before he can get halfway, he must reach a quarter of the way. Before that, he must reach an eighth of the way, and before that a sixteenth of the way. You can carry on dividing the length a potentially infinite number of times, but you will never reach an infinitieth division. Furthermore, it does not follow that Achilles will keep running for infinity, as the line is still finite.
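
To spell the arithmetic out (a standard worked illustration, not anything from the video), the distances in Zeno’s dichotomy form a geometric series whose partial sums never exceed the full length L of the track:

\[
\frac{L}{2} + \frac{L}{4} + \frac{L}{8} + \dots + \frac{L}{2^{N}} \;=\; L\left(1 - \frac{1}{2^{N}}\right) \;\longrightarrow\; L \quad \text{as } N \to \infty.
\]

However many divisions we make, we only ever have a finite number of them and a total distance no greater than L: the divisions are potentially infinite, whilst the distance Achilles actually runs remains finite.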

In regards to the singularity, I shall now quote Quentin Smith:
“If the universe is finite, and the big bang singularity a single point, then at the first instant the entire mass of the universe is compressed into a space with zero volume. The density of the point is n/0, where n is the extremely high but finite number of kilograms of mass in the universe. Since it is impermissible to divide by zero, the ratio of mass to unit volume has no meaningful and measurable value and in this sense is infinite.” - William Lane Craig and Quentin Smith, Theism, Atheism, and Big Bang Cosmology, Oxford: Clarendon, 1993, p209-10.

Claim Three: We don’t really know that the universe began to exist because GR breaks down. We need a Quantum Mechanical description.

Ironically enough, Loop Quantum Gravity, which you appeal to, is not predicated upon an eternal universe. LQG takes the view that spacetime is quantised, that is, divided into discrete constituent parts. In this model, singularities do not really exist: there is a minimum size to nature that prevents them, and so space and time do not come to an end in a singularity. It is essentially a variation of the Tolman cyclic model. There is only one universe, there are only three spatial dimensions, and there is no “free energy” injected into the universe from outside, as with some other cosmological models. The two problems with the Tolman model were: there is no known mechanism for producing a cyclic bounce, and thermodynamic considerations show that the universe of the present should have reached thermodynamic equilibrium and suffered heat death by now. The primary proponent of the LQG view is Martin Bojowald, and he believes that he has solved both of these problems.

In regards to the first problem, the major difficulty has been resolving a type of chaos predicted to occur near classical singularities, which has been shown to be calmed by a loop quantum approach. The main problem lies in trying to resolve the second issue. How can there be truly cyclic behaviour when the second law of thermodynamics predicts that entropy must increase from cycle to cycle? Using a semi-classical approach to calculate entropy, the end of our current cycle in the big crunch would need to differ in entropy from the big bang by a factor of 10^22. Bojowald has three solutions to this.

The first solution is that there is a large, unobservable part of the initial singularity that is a genuine generic manifold, that is, a state of maximum entropy featuring random inhomogeneity and anisotropy. Hence, the entropy of the initial and final singularities would be similar. An inflation mechanism (of a small patch of this manifold) would then produce the requisite homogeneity and isotropy of the current FRW universe. The problem with this solution is that, using an anthropic observer-selection argument, the size of the inflationary patch we should expect to see should be much smaller based on thermodynamic criteria. Therefore, it is exceedingly improbable to find ourselves as the product of an inflationary event of a generic manifold, as Bojowald et al. suggest. Aside from this, this argument takes no account of entropy generation during the cycle. Over infinite time, this would have to be a factor: it may be negligible for a single cycle, and negligible compared to black hole formation, but over infinite cycles it would add up.

The second suggestion is that the “classical” understanding of entropy is misleading – entropy as classically calculated may be too high. This solution mitigates the problem of entropy growth but does not resolve it. While entropy as classically calculated may be too high, the quantum approach still recognizes entropy (and entropy growth) as a genuine physical quantity. Black holes are still highly entropic. So their formation during a cycle, especially if one lands in a Big Crunch, would still cause a final manifold to have more entropy than an initial manifold. One would still expect that this situation would imply a beginning, since it implies a heat death given infinite cycles.

The third suggestion is that, cycle by cycle, the entropy state is genuinely reversible in the cyclic version. However, if there is no recollapse at large volume, the universe would just have gone through a single bounce and will keep expanding. The end state may then be a heat death, but since we don’t know the field content of our universe (as evidenced by the dark matter and dark energy puzzles) the far future may be quite different from what it appears to be now. Thus, Bojowald and his colleagues are not committed to a model with an infinite number of past cycles. They are open to the possibility of an irreversible rise in entropy as a function of time. So the fact that entropy rises, cycle by cycle, and would trip up a proposed past-infinite cyclic model does not count against the viability of the loop quantum approach as a candidate for quantum gravity.

Aside from the entropy issue, there remains the issue of dark energy, which may have the potential to stop the cycling and induce an open-ended expansion. The currently observed dark energy effect, for example, appears adequate to produce an open-ended accelerated expansion. This result would be definitive if the dark energy were of the form of a cosmological constant; and if an entropy gain, cycle to cycle, is denied, one can never have more than one “cycle,” since the cosmological constant would have led to open-ended expansion the first time. Hence, the initial singularity (our Big Bang) represents an absolute beginning. If the dark energy were of the form of quintessence, then it would be possible that its value could reverse and be consistent with a collapse phase, even given the current observational evidence. However, a new problem would arise. After the bounce and the following energy transfer, different modes of the matter fields will become excited, such that the next bounce will differ from the preceding one. On some particular cycle the value of the quintessence term would be such that it would lead to an open-ended expansion. Given an infinite number of rolls of the dice, any nonzero probability that quintessence could produce an open-ended expansion would be sufficient to do so. An open-ended expansion implies that the overall number of cycles has been finite, and hence, the model would not be beginningless.
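
The reasoning behind that last step is just elementary probability (my own gloss, assuming each cycle independently carries some fixed nonzero chance p of tipping into open-ended expansion):

\[
\lim_{n \to \infty} (1 - p)^{n} = 0 \quad \text{for any } p > 0,
\]

so over an unending succession of cycles the open-ended expansion is, with probability one, eventually triggered, and the number of completed cycles is therefore finite.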

William Lane Craig is actually well aware of LQG and various other cosmological models, including the Baum-Frampton model, and discusses them in depth in the chapter he co-wrote with James Sinclair on the Kalam Cosmological Argument in The Blackwell Companion to Natural Theology, a paperback version of which shall be available soon.

It is also interesting how you quote Roger Penrose, when he says things like:
It has been a not uncommon view among confident theoreticians that we may be "almost there", and that a "theory of everything" may not lie far beyond the subsequent developments of the late twentieth century. Often such comments had tended to be made with an eye on whatever had been the status of the "string theory" that had been current at the time. It is harder to maintain such a viewpoint now that string theory has been transmogrified to something (M- or F-theory) whose nature is admitted to being fundamentally unknown at present... From my own perspective, we are much farther from a "final theory" even than this... Various remarkable mathematical developments have indeed come out of string-theoretic (and related) ideas. However, I remain profoundly unconvinced that they are very much other than just striking pieces of mathematics albeit with some input from some deep physical ideas. For theories whose space-time dimensionality exceeds what we directly observe (namely 1+3), I see no reason to believe that, in themselves, they carry us much farther on the direction of physical understanding." - Roger Penrose, The Road to Reality, (London, Jonathan Cape, 2004), p1010

There are also doubts about his Conformal Cyclic Cosmology amongst physicists today. For example, http://arxiv.org/abs/1012.1268 and http://arxiv.org/abs/1012.1305

In regards to QM models, though, in addition to the LQG model there are others. For example, the Baum-Frampton model, which you name. It overcomes the problem of entropy by positing that a type of dark energy pervades the universe, and that the ratio between its pressure and energy density (its equation of state) is less than -1. Dark energy then causes the acceleration of the expansion of the universe to become so great that the visible horizon begins to shrink over time, to the point where galaxies, solar systems, planets, and, eventually, even atoms get ripped apart. This is known as the big rip. However, Baum and Frampton propose that, as we approach the big rip, expansion stops as the universe splits into non-interacting, causally disconnected patches. The universe has expanded so much that most of these patches are empty of normal matter and radiation and only contain this dark energy (also known as phantom energy). The patches containing only phantom energy do not undergo a big rip, but instead contract by an amount exactly equal to the expansion the universe has experienced since the big bang and, prior to reaching a singularity, the contracting patch rebounds due to the phantom energy, and this cycle goes on for infinity. Every patch that undergoes this breaks off into a new universe, and so, in addition to being an infinite cyclic model, it also assumes an infinite multiverse.
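
For clarity (my own gloss on the standard definition, not a quotation from Baum and Frampton), the “equation of state” here is the dimensionless ratio of the dark energy’s pressure to its energy density, and “phantom” dark energy is the case where this ratio dips below -1:

\[
w \;\equiv\; \frac{p}{\rho} \;<\; -1 \qquad \text{(phantom dark energy, in units where } c = 1\text{)},
\]

which is precisely what drives the runaway acceleration towards a big rip in the first place.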

There are many problems with this model. First, in order to avoid the Borde-Guth-Vilenkin theorem, the average contraction must exactly equal the average expansion (for every geodesic). But how is this to be done without introducing explicit fine-tuning? There is no reason that deflation of the scale factor will exactly match post-Big Bang expansion, which even Frampton himself has admitted. Second, globally, entropy should have already grown to an infinite value. How is it that the various regions remain causally disconnected? The key factor in this model is the method for jettisoning the universe’s entropy. As Baum and Frampton emphasise, if matter is retained during a contraction phase, the presence of dust or matter would require that our universe go in reverse through several phase transitions, which would violate the second law of thermodynamics. We thus require that our universe comes back empty! Now, globally, over infinite past time, the model posits that an infinite amount of matter and radiation (hence, infinite entropy) has been produced. How is it, then, that the entropy density avoids reaching an infinite value? Frampton and Baum’s solution is to remove entropy to an unobservable exterior region. However, simply shoving the entropy into other “universes” raises the question whether, given infinite time, a static space and a countable infinity of realms within the multiverse, the realms must not eventually collide. This is a current issue with the model that needs to be addressed.

Cosmologist Xin Zhang argues that the causal disconnection mechanism at turnaround does not work, precisely because the disconnected patches do come back into causal contact. Frampton did point out a possible error in Zhang's critique, but this has since been rectified. As it stands, there are still several severe obstacles to cyclic cosmology, such as density fluctuation growth in the contraction phase, black hole formation, and entropy increase, any of which can obstruct the realization of a truly cyclic cosmology. However, there are even more problems than these. Cosmologist Thomas Banks contends that a contracting space filled with quantum fields will have an "ergodic" property as the space shrinks. Its fields become highly excited as one approaches the end of contraction, and these fields will produce chaotic fluctuations. Spontaneously created matter with a different equation of state will dominate the energy density. That, and the inhomogeneity of the fluctuations, will prevent cycling. Banks and Fischler even suggest that the fields will spontaneously produce a dense "fluid" of black holes, leading to a condition they call a "Black Crunch," for arbitrary states approaching full contraction. Hence, it appears that Baum-Frampton cyclicity will not work.

One thing which you conveniently forget to mention is models such as Alexander Vilenkin's Quantum Tunneling model and the Hartle-Hawking Quantum Gravity model, both of which, if they were true, would support the beginning of the universe.

Claim four: the Borde-Guth-Vilenkin theorem does not prove that the universe began to exist

Sure, there are exceptions to the BGV theorem, but, as we have seen, models such as Baum-Frampton and LQG do not establish an eternal universe, and the eternal inflation model you appeal to is not such an exception. The theorem shows that null and time-like geodesics are, in general, past-incomplete in inflationary models, whether or not energy conditions hold, provided only that the averaged expansion condition Hav > 0 holds along these past-directed geodesics. A remarkable thing about this theorem is its sweeping generality. It makes no assumptions about the material content of the universe. It does not even assume that gravity is described by Einstein's equations; so, if Einstein's gravity requires some modification, the conclusion will still hold. The only assumption the BGV theorem makes is that the expansion rate of the universe never gets below some nonzero value, no matter how small. This assumption should certainly be satisfied in the inflating false vacuum. The conclusion is that past-eternal inflation without a beginning is impossible.
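
For readers who want the condition itself rather than the prose gloss, the content of the theorem can be paraphrased schematically as follows (this is my own rough restatement, not a quotation of Borde, Guth and Vilenkin):

H_{av} \equiv \frac{1}{\lambda}\int_{0}^{\lambda} H \, d\lambda' > 0 \;\Longrightarrow\; \lambda \lesssim \frac{1}{H_{av}}

That is, if the expansion rate H, averaged along a past-directed null or time-like geodesic of affine length λ, is positive, then λ is bounded above, so the geodesic cannot be extended indefinitely into the past; the inflating region is geodesically past-incomplete.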

Alexander Vilenkin states:
"It is said that an argument is what convinces reasonable men and a proof is what it takes to convince even an unreasonable man. With the proof now in place, cosmologists can no longer hide behind the possibility of a past-eternal universe. There is no escape, they have to face the problem of a cosmic beginning." - A. Vilenkin, Many Worlds in One: The Search for Other Universes, (New York: Hill and Wang, 2006), p176

Interestingly, new developments show that eternal inflation models cannot be future-eternal either. For example: http://arxiv.org/abs/1106.3542. Meanwhile, Andrei Linde, who was the chief exponent of eternal inflation, has concurred with Borde, Guth and Vilenkin's conclusion. What is even more hilarious is that in a paper of Guth's that you quote, I could not help but notice the following quote:

"The theorem does show, however, that an eternally inflating model of the type usually assumed, which would lead to Hav > 0 for past directed geodesics, cannot be complete."

Didn’t you complain about quote-mining earlier? Evidently, you have no problem with inconsistency when it comes to yourself. Again, and this is a point that William Lane Craig has stressed many times in print, THERE ARE EXCEPTIONS TO THE BORDE-GUTH-VILENKIN THEOREM. Eternal inflation just is not one of them. As I have already said, William Lane Craig discusses every exception to the BGV theorem in his and James Sinclair’s chapter on Kalam in The Blackwell Companion to Natural Theology.

You appeal to Anthony Aguirre, who, along with Steve Gratton, has proposed a model that evades the theorem, in which the arrow of time reverses at the t = –infinity hypersurface, so the universe ‘expands’ in both halves of the full de Sitter space. It is possible, then, to evade the BGV theorem through a gross deconstruction of the notion of time. Suppose one asserts that in the past contracting phase the direction of time is reversed. Time then flows in both directions away from the singularity. Is this reasonable? The answer is no, for the Aguirre–Gratton scenario denies the evolutionary continuity between the universe which is topologically prior to t and our universe. The other side of the de Sitter space is not our past, for the moments of that time are not earlier than t or any of the moments later than t in our universe. There is no connection or temporal relation whatsoever of our universe to that other reality. Efforts to deconstruct time thus fundamentally reject the evolutionary paradigm.

Claim five: William Lane Craig never mentions these models.

Oh really? Never? Are you sure about that? I guess that is why he talks about them… in depth… in The Blackwell Companion to Natural Theology. He deals with the Gott-Li CTC model, the Linde Eternal-Inflation model, infinite contraction de Sitter cosmology, asymptotically static models, the Baum-Frampton infinite cyclicity phantom bounce model, the Aguirre-Gratton time reversal model, the Veneziano-Gasperini pre-big bang inflation model, the Steinhardt-Turok Ekpyrotic cyclic model, Bojowald’s Loop Quantum Gravity model, Vilenkin’s quantum-tunnelling model and the Hartle-Hawking no boundary model. He deals with many, but not all, of these models in his book Reasonable Faith too, and has many articles on his website, Reasonable Faith. He does not mention them often in debates, mostly for the sake of brevity, but also because they rarely get brought up in Q&A sessions.

Claim six: if God does not require a cause, then we can say that the universe does not require a cause.

Except that the universe did begin to exist, which is why we are beleaguered by all these cosmological models. They attempt to explain the origin of our universe, either by saying it emerged from a multiverse, or by positing that it is one in a never-ending cycle of big bangs and crunches, and so on. If the universe did not begin to exist, then why are cosmologists trying to explain how it began to exist?

You appeal to Alexander Vilenkin’s Quantum Tunnelling model, as well as to events on the sub-atomic level, to try to circumvent the first premise: everything that begins to exist has a cause. Except these do not show things beginning to exist from nothing. One thing which you again conveniently neglect to mention is the fact that when physicists talk about “nothing” they typically do not mean what philosophers mean by nothing. In philosophy, nothing is literally non-being, a complete absence of being. But in physics, “nothing” is applied to a variety of things that are not nothing, and these two examples are no exception. Vilenkin’s own model is, at every point, a case of something to something. In Vilenkin’s model, he starts with a small, closed, spherical universe filled with “false vacuum” and containing some ordinary matter, which then collapses to a point and tunnels into a state of inflationary expansion. This approach still does not solve the problem of creation; rather, it has moved the question back one step: to the initial, tiny, closed, and metastable universe. This universe state can have existed for only a finite time. Where did it come from? The mathematical description of a tunneling universe suggests that time, space and matter came into being a finite time ago; “nothing” refers to the “prior” state. A similar approach is the Hartle-Hawking model. In both cases, the universe begins to exist. Thus, the question of what caused the universe still remains.

As for virtual particles, these do not come into being “out of nothing.” For one thing, there are multiple interpretations of Quantum Mechanics. Some are indeterministic, but others are not. Secondly, indeterminacy is not the same as coming into being from nothing. Thirdly, virtual matter-antimatter particle pairs emerge from the quantum vacuum, which is not nothing but a sea of fluctuating energy that pervades all of space. They then promptly annihilate each other and are absorbed back into the vacuum. This is not nothing by any stretch of the imagination. Lastly, it is not creation ex nihilo but creation ex nihilo uncaused that is the problem. To suggest that something can come into being from non-being uncaused is to quit doing serious metaphysics and to appeal to magic. It then becomes inexplicable why just anything doesn’t pop into being from non-being. To suggest that only universes or particles can pop into being from non-being is special pleading.
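
A back-of-the-envelope figure makes the point about how fleeting these virtual pairs are. Using the standard energy-time uncertainty heuristic (a rough textbook estimate, nothing specific to this debate), a virtual electron-positron pair with rest energy 2m_e c^2 ≈ 1.02 MeV ≈ 1.6 x 10^-13 J can persist for only about

\Delta t \approx \frac{\hbar}{2\,\Delta E} \approx \frac{1.05 \times 10^{-34}\ \mathrm{J\,s}}{2 \times 1.6 \times 10^{-13}\ \mathrm{J}} \approx 3 \times 10^{-22}\ \mathrm{s}

before the borrowed energy must be repaid to the fluctuating vacuum. Whatever else that is, it is borrowing from a physical sea of energy, not creation from a philosophical nothing.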

You complain that such a nothing does not exist anywhere in the universe. True, but given that spacetime came into being, it follows that prior to this there was… ah, that’s right… nothing. Again you appeal to Vilenkin… all the while neglecting to mention that his model is not creation out of nothing. The sheer delicious irony of you accusing theists of being disingenuous is truly hilarious. You complain about William Lane Craig noting that not all physicists agree that quantum events are indeterministic. As I have already explained, there are multiple interpretations of QM, and they are all on an equal footing scientifically at the moment. One interesting thing to point out is that advancements in quantum entanglement have allowed scientists to reduce the amount of quantum indeterminacy. However, even in indeterministic interpretations… events are still caused, which even Victor Stenger admits in his book God: The Failed Hypothesis:
"We have a highly successful theory of probabilistic causes–quantum mechanics.” – Victor Stenger, God: the Failed Hypothesis, Prometheus Press, (2007), p124

Stenger, of course, tries to pull the whole semantic switcheroo between something being indeterministic and something being uncaused, but I think by now it should be obvious that you have no idea what you are talking about.

Whoops. Ironically enough, there is a Paul Davies quote I see atheists like to pass around, in which he notes particles coming into being from “nothing.” What is often cut out, or simply glossed over quickly, is the part where he says:
"It is, of course, a big step from the spontaneous and uncaused appearance of a subatomic particle - something that is routinely observed in particle accelerators - to the spontaneous and uncaused appearance of the universe."

Never mind that quantum fluctuations aren't really "uncaused" and might not actually be spontaneous.

Claim seven: the cause of the universe does not need to be God.

This is only evidence that you have never read any of Craig’s work. Then again, since you feature clips from some of Craig’s debates and talks, it seems odd to me how you could have missed Craig’s defense of this. Even in his debates, where he has to fit everything into a certain amount of time, he still offers a dumbed-down argument for why the first cause is God. As the cause of space and time, such a cause must transcend both. Such a cause must also be extremely powerful, in order to be capable of creating the universe. He also gives a number of arguments for why the cause must be personal:

Firstly, there are only two types of explanation: personal explanations, which refer to the choices of personal agents, and scientific explanations, which refer to the workings of unconscious, material, natural phenomena. Since there was neither space nor time prior to the beginning of the universe, there cannot, by definition, be a scientific explanation of the universe. Thus, the only remaining option is a personal explanation. Secondly, there are two types of immaterial thing: abstract objects, like numbers, and minds. Since abstract objects cannot cause anything, the only option left to us is a mind. Lastly, unconscious things cannot cause anything by themselves; they need to be caused by something else. Thus, a personal being that freely chooses to bring its effect into being is the only remaining option.

Claim eight: Gödel’s Closed Timelike Curve shows how the universe could have possibly created itself out of nothing.

Cosmologists Gott and Li propose a model according to which the early universe (only) is a closed time loop that occasionally gives “birth” to a universe like ours. Aside from the metaphysical and philosophical impossibility of something creating itself, the primary physical problem confronting CTC models in general is their violation of the so-called Chronology Protection Conjecture (CPC). Gott and Li indicate that the CTC should be in a pure vacuum state containing no real particles or Hawking radiation and no bubbles. The reason is that stray radiation would destroy the CTC. However, the CTC has properties that are so unstable that it would quickly destroy itself. A description of this is given by Kip Thorne, who envisages a scenario that allows a local time machine to exist, with one end on a spaceship departing Earth with his wife Carole and the other end on Earth with him in his living room. Imagine, Thorne says, that Carole is zooming back to Earth with one wormhole mouth in her spacecraft, while he is sitting at home on Earth with the other. When the spacecraft gets to within 10 light-years of Earth, it suddenly becomes possible for radiation (electromagnetic waves) to use the wormhole for time travel: any random bit of radiation that leaves his home in Pasadena traveling at the speed of light toward the spacecraft can arrive at the spacecraft after 10 years’ time (as seen on Earth), enter the wormhole mouth there, travel back in time by 10 years (as seen on Earth), and emerge from the mouth on Earth at precisely the same moment as it started its trip. The radiation piles right on top of its previous self, not just in space but in spacetime, doubling its strength. What’s more, during the trip each quantum of radiation (each photon) got boosted in energy due to the relative motion of the wormhole mouths (a “Doppler-shift” boost). After the radiation’s next trip out to the spacecraft then back through the wormhole, it again returns at the same time as it left and again piles up on itself, again with a Doppler-boosted energy. Again and again this happens, making the beam of radiation infinitely strong. In this way, beginning with an arbitrarily tiny amount of radiation, a beam of infinite energy is created, coursing through space between the two wormhole mouths. As the beam passes through the wormhole it will produce a singularity and probably destroy the wormhole, thereby preventing a time machine from coming into being in the first place.
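
The runaway Thorne describes can be caricatured with a single number. If each circuit through the wormhole both piles the beam on top of itself (a factor of roughly 2) and Doppler-boosts it by some factor D > 1, then after n circuits the beam's energy has grown roughly like

E_n \sim E_0 \,(2D)^n \to \infty \quad \text{as } n \to \infty

and, since every circuit returns the radiation to the very same moment, the divergence occurs at the chronology horizon itself. This is only a crude sketch of the pile-up, not Thorne's actual field-theoretic calculation, but it conveys why the would-be time machine is expected to destroy itself.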

Gott and Li proposed a solution that gets round this by proposing a special initial state for the universe: a zero-temperature empty space called an “adapted Rindler vacuum.” It is specially built and balanced such that it does not develop the destructive effect suggested by Thorne earlier. However, this was picked apart by William Hiscock, who noted that, first of all, the Gott-Li choice of initial conditions is highly fine-tuned. In fact, Gott-Li’s vacuum is of “measure zero” in the set of all possible Rindler vacuums. This means that the scenario is just about as unlikely as is possible without ruling it out summarily. D. H. Coule agrees in his summary of quantum gravity models, referring to the Gott-Li model as “rather contrived.” Secondly, the Gott-Li vacuum is not stable, given more realistic physical force fields. The (Rindler) vacuum stress-energy of a nonconformally coupled scalar field, or of a conformally coupled massless field with a ... self-interaction, will diverge on the chronology horizon for all values of the Misner identification scale, which is the parameter that Gott-Li have fine-tuned. In addition, the vacuum polarization of the scalar field considered in the Gott-Li model diverges in all cases, leading to the Thorne effect cited earlier, even for the conformally invariant case examined by Li and Gott. Hence, the regular behavior found by Cassidy and by Li and Gott holds only for a conformally invariant, non-interacting field, and only for the stress-energy tensor. While some fields in nature (such as the electromagnetic field, before interactions are added) are conformally invariant, others, notably gravity itself, are not; and interactions are the rule, not the exception.

Also, in Misner space the Gott-Li model is only possible with identification scale b = 2π, or b = 2πr0 for the multiple de Sitter case. Such an exact value is itself inconsistent with notions of quantum uncertainty. So the Heisenberg uncertainty principle of quantum mechanics (QM) would guarantee that the relevant parameter could not be “just-so.” But if it is not “just-so,” then the universe collapses into a singular condition in the presence of a time machine. Another problem is that this parameter, called the “Misner identification scale,” is not a constant. Rather, it is likely to change dynamically as a function of matter couplings or energy potentials. As soon as it does, the CTC will destabilize. Interestingly, Gott and Li used similar objections when arguing for their model at the expense of the “creation from nothing” approach of Vilenkin and of Hartle and Hawking. Gott and Li criticize the “creation from nothing” approach on the grounds of the uncertainty principle and the fact that their competitors are not using realistic force fields; that is to say, the Vilenkin approach is not close enough to what we expect for the real universe. Yet their own model appears to break down when similar objections are leveled against it.

This is again a model William Lane Craig discusses, but then nobody ever accused you of being familiar with his work or his arguments.

As for the suggestion that the universe created itself, this is a patently absurd claim. In order for something to create itself, it would have to exist before it existed, in order to bring itself into being from non-being… which violates the law of non-contradiction. Sure, an eternal universe is possible, but our universe began to exist… hence all the attempts to explain how it came into being.

Claim nine: most physicists are atheists.

And? I’m not sure about physicists specifically, but recent surveys show that approximately a third of all scientists are theists, a third are agnostic, and a third are atheists, although physicists and engineers are less likely to believe in God than mathematicians, biologists and chemists. Interestingly, most theists who study science choose instead to become medical doctors; roughly 76% of medical doctors are theists.
http://religion.ssrc.org/reforum/Ecklund.pdf
http://www.sciencemag.org/content/277/5328/890
http://www.nature.com/nature/journal/v386/n6624/abs/386435a0.html
http://blog.beliefnet.com/roddreher/2010/04/science-vs-religion-what-do-scientists-say.html
http://chronicle.uchicago.edu/050714/doctorsfaith--.shtml
http://people-press.org/2009/07/09/section-4-scientists-politics-and-religion/

Of course, I would not be particularly bothered even if they were atheists, given that the reasons atheist scientists give for rejecting God are about as poor as those of atheist laypersons. Their opinions on fields outside of physics tend to be rather erroneous; for example, Victor Stenger and Lawrence Krauss both labour under the delusion that, somehow, Jesus never existed and was ripped off from pagan religions, even though no historian alive holds such a position. Well, apart from fringe writers such as Robert Price and Richard Carrier, but nobody takes them seriously. Allow me simply to quote the consensus:
"There is, lastly, a group of writers who endeavour to prove that Jesus never lived--that the story of his life is made up by mingling myths of heathen gods, Babylonian, Egyptian, Persian, Greek, etc. No real scholar regards the work of these men seriously. They lack the most elementary knowledge of historical research. Some of them are eminent scholars in other subjects, such as Assyriology and mathematics, but their writings about the life of Jesus have no more claim to be regarded as historical than Alice in Wonderland or the Adventures of Baron Munchausen." - George Aaron Barton, Jesus of Nazareth: A Biography (New York: Macmillan, 1922) px
"An extreme view along these lines is one which denies even the historical existence of Jesus Christ—a view which, one must admit, has not managed to establish itself among the educated, outside a little circle of amateurs and cranks, or to rise above the dignity of the Baconian theory of Shakespeare." - Edwyn Robert Bevan, Hellenism And Christianity (2nd ed.) (London: G. Allen and Unwin, 1930) p256
"Of course the doubt as to whether Jesus really existed is unfounded and not worth refutation. No sane person can doubt that Jesus stands as founder behind the historical movement whose first distinct stage is represented by the oldest Palestinian community." - Rudolf Bultmann, Jesus and the Word (New York: Scribner, 1958) p. introduction
"A hundred and fifty years ago a fairly well respected scholar named Bruno Bauer maintained that the historical person Jesus never existed. Anyone who says that today—in the academic world at least—gets grouped with the skinheads who say there was no Holocaust and the scientific holdouts who want to believe the world is flat." - Mark Allan Powell, Jesus as a Figure in History: How Modern Historians View the Man from Galilee (Louisville: Westminster John Knox, 1998) p168
"Most scholars regard the arguments for Jesus' non-existence as unworthy of any response—on a par with claims that the Jewish Holocaust never occurred or that the Apollo moon landing took place in a Hollywood studio." - Michael James McClymond, Familiar Stranger: An Introduction to Jesus of Nazareth (Grand Rapids: Eerdmans, 2004) p8, 23–24
"A phone call from the BBC’s flagship Today programme: would I go on air on Good Friday morning to debate with the authors of a new book, The Jesus Mysteries? The book claims (or so they told me) that everything in the Gospels reflects, because it was in fact borrowed from, much older pagan myths; that Jesus never existed; that the early church knew it was propagating a new version of an old myth, and that the developed church covered this up in the interests of its own power and control. The producer was friendly, and took my point when I said that this was like asking a professional astronomer to debate with the authors of a book claiming the moon was made of green cheese." - N. T. Wright, "Jesus' Self Understanding", in Stephen T. Davis, Daniel Kendall, Gerald O’Collins, eds., The Incarnation (Oxford: Oxford University Press, 2004) p48
"In the academic mind, there can be no more doubt whatsoever that Jesus existed than did Augustus and Tiberius, the emperors of his lifetime." - Carsten Peter Thiede, Jesus, Man or Myth? (Oxford: Lion, 2005) p23
"The very logic that tells us there was no Jesus is the same logic that pleads that there was no Holocaust. On such logic, history is no longer possible. It is no surprise then that there is no New Testament scholar drawing pay from a post who doubts the existence of Jesus. I know not one. His birth, life, and death in first-century Palestine have never been subject to serious question and, in all likelihood, never will be among those who are experts in the field. The existence of Jesus is a given." - Nicholas Perrin, Lost in Transmission?: What We Can Know About the Words of Jesus (Nashville: Thomas Nelson, 2007) p32
"To describe Jesus' non-existence as 'not widely supported' is an understatement. It would be akin to me saying, "It is possible to mount a serious, though not widely supported, scientific case that the 1969 lunar landing never happened." There are fringe conspiracy theorists who believe such things - but no expert does. Likewise with the Jesus question: his non-existence is not regarded even as a possibility in historical scholarship. Dismissing him from the ancient record would amount to a wholesale abandonment of the historical method." - John Dickson, Jesus: A Short Life (Oxford: Lion, 2008) p22-23
"The data we have are certainly adequate to confute the view that Jesus never lived, a view that no one holds in any case." - Charles E. Charleston, in Bruce Chilton & Craig A. Evans (eds.), Studying the Historical Jesus: Evaluations of the State of Current Research (Leiden: Brill, 1998) p3
"I think the evidence is just so overwhelming that Jesus existed, that it's silly to talk about him not existing. I don't know anyone who is a responsible historian, who is actually trained in the historical method, or anybody who is a biblical scholar who does this for a living, who gives any credence at all to any of this." - Bart Ehrman, interview with David V. Barrett, "The Gospel According to Bart", Fortean Times (221), 2007
"Frankly, I know of no ancient historian or biblical historian who would have a twinge of doubt about the existence of a Jesus Christ - the documentary evidence is simply overwhelming." - Graeme Clarke, quoted by John Dickson in "Facts and friction of Easter", The Sydney Morning Herald, March 21, 2008
"... only the shallowest of intellects would dare to deny Jesus' existence. And yet this pathetic denial is still parroted by 'the village atheist,' bloggers on the internet, or such organizations as the Freedom from Religion Foundation." - Paul L. Maier (Western Michigan University), “Did Jesus Really Exist?”, 4truth.net, 2007, http://www.4truth.net/fourtruthpbjesus.aspx?pageid=8589952895 (Accessed June 8th 2011)

Even some of their statements when talking about physics are misleading, such as describing things as “uncaused” or “coming into being from nothing” when really they mean “indeterministic” and “emerging from the quantum vacuum.” Another example is how Lawrence Krauss claimed that 2+2 can equal 5, “for very large values of 2.” However, the quantity he refers to, namely 2.5, has ceased to be 2; it is simply the new value, 2.5. Taking 2 apples and adding 2 apples will not magically get you 5 apples… unless you add two extra halves of an apple. In reality, I assume that most have simply never seriously considered the notion of God and could probably do with some philosophical training too. However, even if every scientist believed in God, that would not make a convincing argument for God. So, even if we grant your claim, why think that it is a convincing argument? That is simply an appeal to authority combined with the bandwagon fallacy, and two fallacious arguments do not make a sound one.
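
For what it is worth, the “2 + 2 = 5” joke only works by conflating a rounded display of a quantity with the quantity itself, which a couple of throwaway lines of Python make obvious (purely an illustration of the rounding point, nothing more):

x = 2.5
print(f"{x:.0f} + {x:.0f} = {x + x:.0f}")  # prints "2 + 2 = 5": 2.5 displays as 2 (rounds to the even digit), 5.0 displays as 5
print(f"{x} + {x} = {x + x}")              # prints "2.5 + 2.5 = 5.0", which is all that is actually going on

The underlying arithmetic never changes; only the rounded labels do, which is precisely the point made above.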