Skydivephil, whose video on Kalam I have already refuted, has another video on the Fine-Tuning teleological argument: http://www.youtube.com/watch?v=rt-UIfkcgPY
It managed to be even more egregiously bad, if you can imagine. This video is so bad that I just couldn’t pass up the opportunity to smash it to pieces.
Claim One: The Fine-Tuning argument argues that because the world seems perfect for life, it must be finely tuned.
Thirty seconds or so into the video and already we have our first straw man. It would probably help if you actually familiarised yourself with Craig’s works and arguments… since it seems painfully obvious that you have not. After all, you can hardly refute what somebody believes if you do not know what they actually believe. Simply assuming what other people believe… or worse, deliberately misrepresenting what they believe… is not only fallacious but dishonest.
Let’s see what William Lane Craig actually argues. I shall simply take his form of the argument from his book, Reasonable Faith, which a competent debater would already be familiar with.
“What is meant by “fine tuning?” The physical laws of nature, when given mathematical expression, contain various constants (such as the gravitational constant) whose values are not determined by the laws themselves; a universe governed by such laws might be characterised by any of a wide range of values for these constants. Take, for example, a simple law like Newton’s law of gravity F = Gm1m2/r^2. According to this law, the gravitational force F between two objects depends not just on their respective masses m1 and m2 and the distance between them r, but also on a certain quantity G which is constant regardless of the masses and distance. The law doesn’t determine what value G actually has. In addition to these constants, moreover, there are certain arbitrary physical quantities, such as the entropy level, which are simply put into the universe as boundary conditions on which the laws of nature operate. They are also independent of the laws. By “fine-tuning” one means that small deviations from the actual values of the constants and quantities in question would render the universe life-prohibiting or, alternatively, that the range of life-permitting values is exquisitely narrow in comparison with the range of assumable values.” – William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p158
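For readability, here is the law Craig quotes set out in standard notation; the measured value of G, which (as the quotation stresses) the law itself does not fix, is added purely for illustration:

```latex
F \;=\; \frac{G\,m_1 m_2}{r^2},
\qquad G \approx 6.674 \times 10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}}
```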
Craig goes on to note such examples. The universe is conditioned principally by the fine structure constant (or electromagnetic interaction), gravitation, the weak force, the strong force, and the ratio between the mass of a proton and the mass of an electron. Slight variations in some of these values would prevent a life-permitting universe. If either gravitation or the weak force were different by even 1 part in 10^100, it would have prevented a life-permitting universe.
There are also two other parameters governing the expansion of the universe, one relating to the density of the universe and one relating to the speed of that expansion. If the expansion of the universe had been slower or faster by even one part in a hundred thousand million million million, then the universe would either have collapsed back in on itself or expanded too quickly for galaxies to form. Other constants include the cosmological constant and the entropy per baryon in the universe, both of which are extraordinarily fine-tuned.
So, when we say fine-tuned we mean: the constants and quantities are just right for the existence of intelligent life.
And when we say life we mean: organisms with the properties to take in food, take in energy from food, grow, adapt to their environment, and reproduce, whatever form such organisms might take.
“In order for the universe to permit life so defined, the constants and quantities have to be incomprehensibly fine-tuned. In the absence of fine-tuning, not even atomic matter or chemistry would exist, not to speak of planets where life might evolve.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p159
Fine-tuning falls into three categories:
i) The fine-tuning of the laws of nature.
ii) The fine-tuning of the constants of nature.
iii) The fine-tuning of the initial conditions of the universe.
Therefore, the argument can be formulated as:
1) The fine-tuning of the universe is due to either physical necessity, chance, or design.
2) It is not due to physical necessity or chance.
3) Therefore, it is due to design.
It would help if you at least got the argument right.
Claim Two: We observe that the universe is finely tuned for life, because we are alive.
This is essentially the classic Anthropic Argument. Allow me to quote none other than Dr. Craig once more:
“The argument is, however, based on confusion. Barrow and Tipler confuse the true claim
A. If observers who have evolved within a universe observe its constants and quantities, it is highly probable that they will observe them to be fine-tuned for their existence.
With the false claim:
A’. It is highly probable that a universe exist which is finely tuned for the evolution of observers within it.
An observer who has evolved within the universe should regard it as highly probable that he will find the constants and quantities of the universe fine-tuned for his existence; but he should not infer that it is therefore highly probable that such a fine-tuned universe exist.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p165
Craig goes on to give the following example:
“You are dragged before a firing squad of one hundred trained marksmen to be executed. The command is given: “Ready! Aim! Fire!” You hear the deafening roar of the guns. And then you observe that you’re still alive, that the one hundred marksmen missed! Now what do you conclude? “I really shouldn’t be surprised at the improbability of their all missing because if they hadn’t all missed, then I wouldn’t be here to be surprised about it. Since I am here, there’s nothing to explain!” Of course not! While it’s correct that you shouldn’t be surprised that you don’t observe that you are dead (since if you were dead, you couldn’t observe the fact), nevertheless, it doesn’t follow that you shouldn’t be surprised that you do observe the fact that you are alive. In view of the enormous improbability of all the marksmen’s missing, you ought to be very surprised that you observe that you are alive and so suspect that more than chance alone is involved, even though you’re not surprised that you don’t observe that you are dead.” - William Lane Craig, Reasonable Faith, 3rd edition, Crossway, (2008), p165-166
Again, if you were familiar with Craig and his arguments, then you would already have known that Craig is aware of this “response” and has already replied to it.
Claim Three: The universe is actually largely hostile and devoid of life. Life only inhabits a small part of the universe.
And? Oh, I’m sorry, you seem to be suffering from the delusion that this counts as an argument against fine-tuning. How does this in any way change the fact that, in order for intelligent observers to exist, the constants and quantities of nature need to be extraordinarily fine-tuned? In fact, it actually serves to support the argument: in a universe that is overwhelmingly hostile, we still manage to find life, although that is a separate argument. In order for life even to be capable of forming, the universe needs to be fine-tuned. THAT is the current argument.
Claim Four: We don’t know the available range.
Except that even if it were the case that the range of possible universes were very narrow, we would still be presented with many variables requiring fine-tuning. Moreover, in the absence of any physical reason to think that these values are constrained, we are justified in adopting a principle of indifference and assuming that the probability of our universe’s existing is the same as the probability of any other universe’s existing.
You blather on about the shuffling of cards, where the initial probability is 1 in 52 and this probability then supposedly increases with the shuffling of the cards, and so on. I find it curious that you seem to think that shuffling the cards has any effect on the likelihood of drawing a particular card. When we put the card back and shuffle, there are still exactly the same 52 cards, so each draw remains a 1-in-52 affair. The only probability that becomes lower is that of drawing the same card repeatedly: there is a 1 in 52 chance of drawing a specific card on any single draw, but the chance of picking that same card many times in a row is much lower. I would be very interested in examples of people getting the same card multiple times in a row, or rolling the same number on a die multiple times in a row.
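To make the arithmetic explicit, here is a quick sketch of my own (not from the video, nor from Craig) showing the only probability that actually shrinks in the card example, namely repeating the same specific outcome:

```python
from fractions import Fraction

# Sketch of the card arithmetic above (my own illustration): drawing one
# specific, pre-named card from a well-shuffled 52-card deck, replacing and
# reshuffling between draws, so that the draws are independent.
p_single = Fraction(1, 52)

for n in (1, 2, 3, 5):
    p_run = p_single ** n  # independent draws multiply
    print(f"naming the card in advance and drawing it {n} time(s) in a row: "
          f"1 in {p_run.denominator:,}")

# Reshuffling never changes the 1-in-52 odds of any single draw; what becomes
# small is the chance of the same specific outcome recurring again and again.
```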
Of course, none of this circumvents the fact that the probability of the constants of nature having the values that they do is extraordinarily low. Suppose that we take 10^17 silver dollars and lay them on the face of Texas. They will cover all of the state two feet deep. Now mark one of these silver dollars and stir the whole mass thoroughly. Blindfold a man and tell him he must pick up one silver dollar. What chance would he have of getting the right one? Or suppose we placed a single white ball in a sea of a billion billion billion black balls and had a blindfolded man draw one: what is the probability of his getting the white ball?
More specifically, gravitation and the weak force are both fine-tuned to one part in 10^100, the density of the universe is fine-tuned to one part in 10^60, the cosmological constant is fine-tuned to one part in 10^120, and the entropy per baryon in the universe is fine-tuned to one part in 10^10^123. Oh, and the expansion rate of the universe is fine-tuned to one part in a hundred thousand million million. Of course, you think such odds are easy to overcome. Imagine then 10^60 bullets, one of which is a blank. You are told to select one, put it into a gun, hand the gun to an expert marksman, and then have them shoot you. I assume that you would have no problem with such odds.
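For a sense of scale, here is a small sketch of my own (taking the figures above at face value; the comparison with 52! is my addition) that expresses each of these odds as a base-10 exponent so they can be compared at a glance:

```python
import math

# Order-of-magnitude sketch of the odds cited above (figures taken at face
# value from this article; the 52-card shuffle comparison is my own addition).
odds_exponents = {
    "gravitation / weak force (1 in 10^100)":     100,
    "density of the universe (1 in 10^60)":       60,
    "cosmological constant (1 in 10^120)":        120,
    "marked silver dollar in Texas (1 in 10^17)": 17,
    "the single blank among 10^60 bullets":       60,
}
for label, exp in odds_exponents.items():
    print(f"{label:45s} probability ~ 10^-{exp}")

# For reference: the chance of shuffling a deck into one pre-specified order
# is 1/52!, roughly 10^-68.
print(f"{'one pre-specified 52-card shuffle':45s} "
      f"probability ~ 10^-{math.log10(math.factorial(52)):.0f}")
```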
Claim Five: The design hypothesis is an argument from ignorance, akin to saying “a magic man done it.”
Sorry, chump, but an appeal to ridicule does not make a valid argument. Again, if you had actually read Craig’s work, as opposed to pretending that you have, then you would know that the first premise of Craig’s fine-tuning teleological argument is: the fine-tuning of the universe is due to either physical necessity, chance, or design.
He then argues for the truth of premise 2, and thus for the conclusion of design, by way of an inference to the best explanation. In what way is this an “argument from ignorance?”
In order to falsify the design hypothesis, you need to show that either chance or physical necessity is a more plausible option. Yet all we are met with is your incessant whining.
Claim Six: Fine-tuning really refers to circumstances when the parameters of a model must be adjusted very precisely in order to agree with observations. Scientific explanations of these values are therefore possible.
As if I needed further evidence that you don’t know anything about the subject whatsoever.
First of all, you conveniently fail to read out the following from the article of Craig’s you quote:
“Most importantly, inflationary models require the same fine-tuning which some theorists had hoped to eliminate via such models.”
You seem to have this nasty habit of only reading out the parts you want to… all the while complaining that theists are the ones who quote mine. Ah, the irony.
Indeed, in order to produce such an enormous inflationary rate of expansion — and to result in the necessary values for our universe’s critical density — inflation theories require two or more parameters to take on particularly precise values. So precise are these values that the problem of fine-tuning remains and is merely pushed one step back.
Secondly, this article was written in 1996, whereas in the 3rd edition of Reasonable Faith, which was published in 2008, Dr. Craig says:
“Observations indicate that at 10^-43 second after the Big Bang the universe was expanding at a fantastically special rate of speed with a total density close to the critical value on the borderline between recollapse and everlasting expansion.” – William Lane Craig, Reasonable Faith, 3rd Edition, Crossway, (2008), p158
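For reference (my gloss, not Craig’s), the “critical value” in the quotation is the standard Friedmann density separating recollapse from everlasting expansion, with H the Hubble parameter:

```latex
\rho_{c} \;=\; \frac{3H^{2}}{8\pi G},
\qquad \Omega \;\equiv\; \frac{\rho}{\rho_{c}} = 1 \ \text{on the borderline}
```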
He then notes the two values such inflation requires to be fine-tuned. You might want to get a little more up-to-date.
Thirdly, inflation only accounts for certain values, not all of them, so we are still presented with constants and quantities whose finely tuned values require an explanation.
What, then, of the multiverse that you appeal to? Keen theoreticians talk about the multiverse as if it were a really existing thing that can be inferred from the evidence, when in reality the only reason the multiverse is discussed at all is because:
a) certain models require there to be a multiverse just to make them work (thus violating Occam’s razor by needlessly multiplying entities beyond necessity) and
b) it multiplies their probabilistic resources in order to reduce the improbability of the occurrence of fine-tuning.
Let’s quote some Paul Davies:
"The general multiverse explanation is simply naive deism dressed up in scientific language. Both appear to be an infinite unknown, invisible and unknowable system. Both require an infinite amount of information to be discarded just to explain the (finite) universe we observe." - Paul Davies, from Bernard Carr, ed., Universe or Multiverse?, (Cambridge: Cambridge University Press, 2007), p495
George Ellis:
“What is new is the assertion that the multiverse is a scientific theory, with all that implies about being mathematically rigorous and experimentally testable. I am skeptical about this claim. I do not believe the existence of those other universes has been proved—or ever could be. Proponents of the multiverse, as well as greatly enlarging our conception of physical reality, are implicitly redefining what is meant by “science.” – George F. R. Ellis, Does the Multiverse Really Exist?, Scientific American Magazine, July 19th 2011, http://www.scientificamerican.com/article.cfm?id=does-the-multiverse-really-exist (Accessed August 21st 2011)
However, let’s take the multiverse seriously for a second. In what way does this mitigate the problem of fine-tuning? There are, actually, two different types of multiverse. The first is an unrestricted view on which every possible world does, in fact, exist. A similar view proffered by Max Tegmark is that everything that exists mathematically exists physically. The second type is more restricted, in that a multitude of universes are generated by some kind of physical process or “multiverse generator.”
There are, of course, arguments why such an unrestricted multiverse is implausible. First of all, you have needlessly invoked the existence of all possible universes just in order to explain the existence of our own. So now we have not just one, but many universes that require explaining, and we still haven’t gotten round to explaining why our own universe exists.
Secondly, consider the following scenario: rolling a six-sided die 100 times, and having it come up on six each time. This is an improbable scenario, for which we would normally demand explanation. We shall refer to this scenario as DTx, with D representing the particular die and Tx representing the sequence “coming up 100 times on six.” Normally, we would never accept the proposition that scenario DTx simply occurred by chance; it would require an explanation. The reason for this is not just because it is improbable, but because it also conforms to an independently given pattern. To borrow a phrase from William Dembski, this is a case of “specified complexity.”
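As a quick check on just how improbable the DTx scenario is (my own arithmetic, assuming a fair die and independent rolls):

```python
import math

# Probability of a fair six-sided die landing on six in 100 consecutive,
# independent rolls: the DTx scenario described above.
p_dtx = (1 / 6) ** 100
print(f"P(DTx) = (1/6)^100 = {p_dtx:.2e}")             # about 1.5e-78
print(f"i.e. about 10^{100 * math.log10(1 / 6):.0f}")  # about 10^-78
```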
Now, for any possible state of affairs S, such as DTx, the unrestricted multiverse implies that this state of affairs S is actual. With regard to explanation, for all possible states of affairs S, advocates of an unrestricted multiverse must claim that the fact that the unrestricted multiverse entails S either (i.a) undercuts all need for explanation, or (i.b) does not. Further, with regard to some state of affairs S (such as DTx) that we normally and uncontroversially regard as improbable, they must claim that the fact that the unrestricted multiverse entails S either (ii.a) undercuts the improbability of S (since it entails S), or (ii.b) does not.
Both (i.a) and (ii.a) would constitute a reductio ad absurdum of the unrestricted multiverse, since they would undercut both all justifications in science based on explanatory merits and ordinary claims of probability, such as the die example. Whereas if advocates of the unrestricted multiverse opt for (i.b) and (ii.b), then the mere fact that the unrestricted multiverse entails a life-permitting universe undercuts neither the need to explain the life-permitting universe nor its overwhelming improbability. Thus the unrestricted multiverse is a self-defeating hypothesis.
Thirdly, you earlier ridiculed appeals to the design hypothesis as tantamount to suggesting “a magic man done it,” yet how is appealing to an unrestricted multiverse any better, even if we grant the baseless supposition that inferring design is identical to appealing to magic? I would think that the unrestricted multiverse hypothesis is even worse than appealing to a “magic man.”
Lastly, and ironically enough, the unrestricted multiverse hypothesis implies that God exists, since the unrestricted multiverse hypothesis is the view that every possible world exists:
1) It is possible that a maximally great being exists.
2) If it is possible that a maximally great being exists, then a maximally great being exists in some possible world.
3) If a maximally great being exists in some possible world, then it exists in every possible world.
4) If a maximally great being exists in every possible world, then it exists in the actual world.
5) If a maximally great being exists in the actual world, then a maximally great being exists.
6) Therefore, a maximally great being exists.
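For what it’s worth, the modal step can be compressed into a few lines (a sketch of my own in standard S5 notation, where G abbreviates “a maximally great being exists” and maximal greatness includes necessary existence, so premises (1) and (2) amount to the possibility of necessary G):

```latex
\begin{align*}
&\Diamond\Box G              && \text{premises (1), (2)}\\
&\Diamond\Box G \to \Box G   && \text{characteristic S5 principle (cf. premise (3))}\\
&\Box G                      && \text{modus ponens}\\
&\Box G \to G                && \text{what is necessary is actual (premises (4), (5))}\\
&G                           && \text{conclusion (6)}
\end{align*}
```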
God is either necessary or impossible, and since the concept of a maximally great being is intuitively and a posteriori a coherent notion, it follows that He exists given the unrestricted multiverse hypothesis. Of course, there are reasons for concluding that God exists based on ontological arguments that do not rely on such an extravagant unrestricted multiverse hypothesis; indeed, even this version of the ontological argument does not rely on there being an unrestricted multiverse, or even a multiverse at all.
However, what of a more restricted version of the multiverse hypothesis? That is to say, there is a physical “multiverse generator” either via inflation or super-string cosmology and so on. Such an appeal, whilst more realistic, is even more problematic. First of all, there needs to be a plausible mechanism for the generation of universes within the multiverse. Let’s consider the inflationary multiverse. In order to explain the fine-tuning of our life-permitting universe, it would need to hypothesise one or more mechanisms or laws that will do the following four things:
i) cause the expansion of a small region of space into a very large region.
ii) generate the very large amount of mass-energy needed for that region to contain matter instead of merely empty space.
iii) convert the mass-energy of inflated space to the sort of mass-energy we find in our universe.
iv) cause sufficient variations amongst the constants of physics to explain their fine-tuning.
Conditions (i) and (ii) are met by two factors:
a) the postulated inflation field that gives the vacuum a positive energy density.
b) the fact that General Relativity dictates that space expands at an enormous rate in the presence of a large, near-homogeneous positive energy density.
Without either factor, there would neither be regions of space that inflate, nor would these regions have the mass-energy necessary for a universe to exist. Condition (iii) is met by a combination of Einstein’s equivalence of mass and energy, E = mc^2, and the assumption that there is a coupling between the inflation field and matter fields. Finally, condition (iv) is achieved by combining inflationary cosmology with super-string/M-Theory, which allows for 10^500 possible worlds. We thus see that, for the four conditions to be met, the multiverse generator requires all of these factors to be in place.
There is also a fifth condition (v), which is, however, not met: the laws governing the “multiverse generator,” whether of the inflationary type or some other, must be just right in order to produce life-permitting universes rather than just dead ones. Even though certain physical laws can vary in superstring/M-Theory, there are certain fundamental laws and principles that underlie superstring/M-Theory and that therefore cannot be explained as part of a multiverse selection effect. For example, without the principle of quantisation, all electrons would be sucked into the atomic nuclei, and hence atoms would be impossible. Without the Pauli exclusion principle, electrons would all occupy the lowest atomic orbit, and thus complex and varied atoms would be impossible. Without a universally attractive force between all masses, such as gravity, matter would not be able to form sufficiently large material bodies (such as planets) for life to develop, or for long-lived energy sources such as stars to exist. We thus see that the multiverse simply pushes the question back, since it requires fine-tuning itself.
Thirdly, the inflationary multiverse runs into a major problem in explaining the low entropy of the universe. The inflationary multiverse postulates that our universe exists in a true vacuum state with an energy density that is nearly zero, whereas earlier it existed in a false vacuum state with a very high energy density. The false vacuum state is expanding so rapidly that, as it decays, the bubbles of true vacuum, or “bubble universes,” that form within it, though themselves expanding, are unable to keep up with the expansion of the false vacuum. Each bubble is then subdivided into domains bounded by event horizons, each domain constituting an observable universe.
This is essentially a more grandiose version of Ludwig Boltzmann’s hypothesis. Among the many worlds generated by inflation, there will be some worlds that are in a state of thermodynamic disequilibrium, and only such worlds can support observers. It is therefore not surprising that we find ourselves in a world in a state of disequilibrium, since that is the only kind of world that we could observe. Of course, the same problems that were levelled against Boltzmann’s hypothesis can be levelled against the inflationary multiverse.
In a multiverse of eternally inflating vacua, most of the volume will be occupied by high entropy, disordered states incapable of supporting observers. There are thus only two ways in which observable states can exist:
1) by being part of a relatively young, low entropy world, or;
2) by being a thermal fluctuation in a high entropy world.
The objection, then, is that it is overwhelmingly more probable that a much smaller region of disequilibrium should arise than one as large as our observable universe. Thus, in the multiverse of worlds, observable states involving such an initial low-entropy condition will be an incomprehensibly tiny fraction of all the observable states there are. If we are just one random member of an ensemble of worlds, then we should be observing a smaller world. It would be overwhelmingly more probable that there really isn’t a vast, orderly universe out there, despite our observations; it’s all an illusion! Indeed, the most probable state is an even smaller universe consisting of a single brain that appears out of the disorder via a thermal fluctuation. Thus, in all probability, you alone exist, and everything you observe around you, including your physical body, is an illusion. This is the bizarre paradox known as the “invasion of the Boltzmann brains.”
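The point can be made quantitative with Boltzmann’s own relation S = k ln W (a standard textbook sketch, not something from the video or from Craig): the probability of a random fluctuation from equilibrium, with entropy S_max, down to a macrostate of entropy S scales as

```latex
P(\text{fluctuation to entropy } S) \;\propto\; e^{(S - S_{\max})/k}
```

so a dip just deep enough to assemble a single brain is incomprehensibly more probable than the vastly deeper dip required for an entire low-entropy observable universe.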
Thus appealing to the multiverse to explain fine-tuning totally backfires, as we are required to assume even more statistical improbability and fine-tuning just to make it work. God, by contrast, not being composed of any material parts, is simple, and far less complicated than an infinite multiverse.
Claim Seven: The multiverse is testable, and there is evidence that bubble universes have collided with ours.
Oh really? In Feeney et al.’s paper (http://arxiv.org/abs/1012.1995), they outline a particular method of analysing the CMB temperature fluctuations. Projected onto the 2-dimensional surface of last scattering, the leftover signal of a bubble collision would have azimuthal symmetry. They assume that a bubble collision has left a mark in the CMB that consists of a slightly different temperature in such an azimuthal patch. They use an algorithm that analyses the CMB temperature fluctuations in three ways. First, it searches for areas with such azimuthal symmetry; secondly, it searches for edges where the temperature makes a slight step; lastly, if such an edge is found, it looks for the best parameters to reproduce the findings. They first use fake CMB data to test their algorithm, a stage of the simulation represented by a four-quadrant skymap in their paper.
Each quadrant of this skymap shows the same area, just mirrored vertically, horizontally, and diagonally. The upper-left quadrant shows the patch with the temperature variation from the bubble collision without fluctuations superimposed, and the upper-right quadrant adds random fluctuations. The lower-left quadrant shows the result of looking for patches of azimuthal symmetry, and the lower-right quadrant shows the result of looking for edges with temperature steps. When they analysed the actual data, however, their algorithm found azimuthal symmetry but did not find edges, and azimuthally symmetric temperature modulations are not unique to bubble collisions. Feeney et al.’s results are nothing more than evidence that there are some features in the CMB. This is not evidence of an inflationary multiverse by a long shot.
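To make the three-stage procedure concrete, here is a deliberately toy sketch of my own in plain numpy (all parameters, such as the patch size, disc radius, amplitude, and noise level, are made up for illustration, and this is nothing like the full statistical machinery of their paper): it injects an azimuthally symmetric disc into mock fluctuations, scores the azimuthal symmetry, and then looks for a temperature step.

```python
import numpy as np

# Toy illustration only (not Feeney et al.'s pipeline): build a flat mock CMB
# patch, inject an azimuthally symmetric "collision" disc, add noise, then
# (1) score azimuthal symmetry and (2) look for an edge in the radial profile.
rng = np.random.default_rng(0)
N = 256                                   # pixels per side
y, x = np.indices((N, N)) - N / 2
r = np.hypot(x, y)                        # distance from the patch centre

disc_radius, disc_amp = 60.0, 0.5         # hypothetical collision signature
patch = disc_amp * (r < disc_radius) + rng.normal(0.0, 1.0, (N, N))

# Work inside the inscribed disc so every radial bin is a complete annulus.
nbins = 64
mask = r < N / 2
bins = np.linspace(0.0, N / 2, nbins + 1)
idx = np.digitize(r[mask], bins) - 1
vals = patch[mask]

# Stage 1: azimuthal symmetry, measured as how much of the variance is
# captured by the azimuthally averaged (radial) profile alone.
radial_mean = np.array([vals[idx == i].mean() for i in range(nbins)])
symmetry_score = radial_mean.var() / vals.var()

# Stage 2: edge detection, taken as the largest step in the radial profile.
step = np.abs(np.diff(radial_mean))
edge_bin = int(np.argmax(step))
edge_radius = bins[edge_bin + 1]

print(f"azimuthal symmetry score: {symmetry_score:.3f}")
print(f"largest temperature step at r ~ {edge_radius:.1f} px "
      f"(injected disc edge at {disc_radius} px)")
```

Even in this cartoon, the symmetry statistic will light up for any roughly circular feature, which is precisely why, as noted above, azimuthal symmetry on its own is not a signature unique to bubble collisions.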
One particular question that casts doubt over whether or not this is really evidence of an inflationary multiverse is: if our bubble was subject to a collision, why was it just nice enough to reveal itself in these CMB findings rather than wiping us out? Their paper starts from the assumption that the signal of a bubble collision is of such a particular sort that it merely results in a small temperature difference. How is it that eternal inflation would result only in a signal that is just barely observable, rather than something more catastrophic? Of course, all of this is moot: whether or not a multiverse really exists, it is simply insufficient to explain fine-tuning, as has already been demonstrated.
Claim Eight: The constants of physics change in various locations across the universe.
Even if this is the case, it does absolutely nothing to undermine the fine-tuning argument whatsoever. The argument is not that the constants are unable to change, but that they need to take particular values in order for life to evolve, and we still find ourselves in a “Goldilocks zone.” Instead of the whole universe being fine-tuned for life, humanity finds itself in a corner of space where the values of the fundamental constants happen to be just right for it. You would need to show that life has formed where these values are different; simply showing areas where a constant is different is not enough, as such a finding is fully consistent with the fine-tuning argument. In fact, if the laws, constants, etc. were different elsewhere in the universe, that would explain why the universe is so hostile and why the life-permitting region of the universe is so small.
However, there are further reasons why this isn’t a problem for the fine-tuning argument. First of all, it does nothing to falsify the contention that the initial conditions of the universe needed to be just right in order for life to form. Second, it does nothing to falsify the contention that these values need to be just right in order for life to form. Let’s say that, in order for planets to form, constant X needs to take value Y, and that, in order for planets to stay together, X needs to stay within the range permitted by Z. Once planets form, the value can change by any amount, as long as it does not fall outside the range permitted by Z. Lastly, you would need to show that every example of fine-tuning is wrong, yet you have not even shown how the example of the fine-structure constant is wrong. The fact that alpha can vary has no bearing on the claim that alpha needs to have a particular value in order for life to form, and even refuting this one example would not automatically refute the other examples.
You appeal to cyclic models, and say that they allow for varying values of the laws of physics. Since they allow for an eternally cycling universe, every so often a universe such as ours will appear in the cycle. Of course, there are problems with such cyclic models truly being eternal. I already addressed Loop Quantum Gravity, the Aguirre-Gratton model, the Gott-Li CTC model and the Baum-Frampton model in my critique of your video on the Kalam Cosmological argument, but I shall briefly mention them here. Bojowald himself is open to the possibility of an irreversible rise in entropy as a function of time; so the fact that entropy rises, cycle by cycle, and would trip up a proposed past-infinite cyclic model does not count against the viability of the loop quantum approach as a candidate for quantum gravity. Secondly, dark energy prevents there from being a truly cyclic universe, whether it takes the form of a cosmological constant or of quintessence. The Baum-Frampton model itself requires fine-tuning in order to work, not to mention other problems with the viability of the model.
The Aguirre-Gratton model denies any evolutionary continuity between the region which is topologically prior to t and our universe. The other side of the de Sitter space is not our past, for the moments of that time are not earlier than t or than any of the moments later than t in our universe. There is no connection or temporal relation whatsoever of our universe to that other reality. Efforts to deconstruct time thus fundamentally reject the evolutionary paradigm. As for the CTC model, it has unstable properties that prevent a CTC from being physically viable. Gott and Li’s solution to this, however, requires… you guessed it, fine-tuning in order to make it work. Thus, these models either do not establish an eternal universe, have internal problems that prevent them from being a viable option, or require fine-tuning and so would only serve to push the question back further.
I shall now address the cyclic models I did not address in my critique. I did briefly mention Roger Penrose’s Conformal Cyclic Cosmology, but did not go into detail, so I shall do that here. The first problem is that the evidence that Penrose and Gurzadyan claimed supported the CCC model doesn’t. Penrose and Gurzadyan claimed to have found concentric low-variance circles at high statistical significance in the WMAP temperature skymaps. However, two independent studies were both unable to reproduce these results.
Penrose’s solution to the entropy problem is to suggest that the initial singularity is the same thing as the open-ended DeSitter-like expansion that our universe seems about to experience. According to Penrose, physically, in the very remote future, the universe “forgets” time, in the sense that there is no way to build a clock with just conformally invariant material. This is related to the fact that massless particles, in relativity theory, do not experience any passage of time. With conformal invariance both in the remote future and at the Big Bang origin, he argues that the two situations could be physically identical, so that the remote future of one phase of the universe becomes the Big Bang of the next. However, for this scenario to work, all massive fermions and massive, charged particles must disappear into radiation, including, for example, free electrons. There is currently just no justification for positing this.
Penrose’s CCC is also based on the Weyl Curvature Hypothesis and Paul Tod’s implementation of this idea within the general theory of relativity. Weyl curvature is a kind of curvature where the effect on matter is of a distorting or tidal nature, rather than the volume-reducing one of material sources. Penrose then suggests that the Weyl curvature is constrained to be zero, or at least very small, at the initial singularity of the actual physical universe. This mathematical technique is necessary to stitch a singularity to a maximally extended DeSitter expansion. Tod’s formulation of the Weyl Curvature Hypothesis is the hypothesis that a past-spacelike hypersurface boundary can be adjoined to a space-time in which the conformal geometry can be mathematically extended smoothly through it to the past side of this boundary. Penrose’s “outrageous proposal,” as he refers to it, is to suggest that we take this mathematical fiction seriously as something physically real.
The failing of this approach is the supposed correspondence between Weyl curvature and entropy. The correspondence between Weyl curvature and entropy seems clear enough when one is considering the structure of the initial Big Bang singularity, given its vanishingly small entropy state. But while the DeSitter-like end state of the universe also minimises Weyl curvature, its entropy is maximised. Like black holes obeying the Hawking-Bekenstein entropy law, DeSitter space has a cosmological horizon with entropy proportional to its area, and it is generally believed that this state represents the maximum entropy that can fit within the horizon. Penrose regards the entropy of the cosmological horizon as spurious and invokes a non-unitary loss of information in black holes in order to equalise the small entropy at the boundary. Penrose attributes the large entropy at late cosmic times to degrees of freedom internal to the black holes and suggests that in CCC the universe’s entropy is renormalised, so that we can discount the entropy contribution from the horizon when all black holes have evaporated.
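The horizon-entropy law being referred to is the standard Bekenstein-Hawking / Gibbons-Hawking relation, quoted here for reference; it applies to black-hole and de Sitter cosmological horizons alike:

```latex
S_{\text{horizon}} \;=\; \frac{k\,c^{3}A}{4\,G\hbar}
```

where A is the area of the horizon in question.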
However, the entropy of the cosmological horizon must have physical meaning; otherwise the physics of black hole decay, upon which Penrose’s scenario depends, would not work properly. Furthermore, black hole decay is actually a dynamic process that is the sum of the energy lost by Hawking radiation plus the energy gained by absorption of local matter created by thermal fluctuations due to the DeSitter Gibbons-Hawking temperature. These thermal fluctuations therefore suggest that the entropy of the cosmological horizon is a real physical manifestation. It therefore seems unwarranted to embrace Penrose’s position, and very few physicists have been persuaded by Penrose’s non-unitary brand of quantum physics. Whilst the Weyl curvature is the same between the two states that Penrose wishes to say are identical, the entropy is not; therefore the two states cannot be identical. Thus, CCC does not avert an absolute beginning, and if it cannot be eternal, then there can only be a limited number of cycles, and so it does not explain fine-tuning.
The Veneziano-Gasperini pre-Big Bang inflation model suffers in that it fails to overcome the entropy problem. Furthermore, given infinite past time, all the pre-Big Bang black holes should have coalesced by now into a massive black hole co-extensive with the universe. The Steinhardt-Turok ekpyrotic cyclic model, meanwhile, is subject to the Borde-Guth-Vilenkin theorem, and so has a beginning. There are also physical problems with such a model, as outlined here: http://arxiv.org/abs/hep-th/0202017
Claim Nine: Other forms of life could evolve given different laws/constants.
Sorry, but that just isn’t the case with the examples given. By life, as I have already explained, we mean organisms with the properties to take in food, take in energy from food, grow, adapt to their environment, and reproduce, whatever form such organisms might take. One can wonder how such life forms would be able to evolve in a universe with no stars or planets, or a universe with no chemicals or atomic matter. This only goes to show how vastly unfamiliar with the fine-tuning argument you are. Furthermore, weren’t you the one who earlier claimed that the universe is vastly hostile to life? Obviously not as hostile as you would have had us believe earlier.
Claim Ten: Many theists believe that life was created via a miracle.
And? There are atheists who believe in directed panspermia: that life on earth, and even our universe as a whole, was created by advanced aliens. This is simply irrelevant to the fine-tuning argument. I, and other serious-minded theists, have no problem with evolution and abiogenesis.
Claim Eleven: God could have made life possible in any universe.
God could have made a universe where Richard Dawkins was the pope and William Lane Craig was head of the British Humanist Association. What’s your point? Are you suggesting that it is impossible for an omnipotent being to create the universe and all life within it in the manner in which our universe was created? You are also assuming that God is omniderigent, i.e. all-causing, which would undermine creaturely freedom. Not only is your argument irrelevant, it is also invalid. Just because God has the power to pluck every hair from your head, it does not follow that God has to exercise that power. Here’s a newsflash: not all Christians are Calvinists. I would try explaining Molinism to you, but I assume that it would be lost on you, given that you apparently have a hard time even understanding what the fine-tuning argument is arguing.
This video of yours managed to be even more dismal, contrived, convoluted and mind-numbingly stupid than your video on the Kalam cosmological argument. I think it is clear that you have no idea what you are talking about and haven’t even so much as bothered actually reading the arguments you are trying to refute. Then again, actually reading what Craig, Robbins, et al. have to say on fine-tuning would require you to read and think for yourself… Or you HAVE read their arguments and simply chose to blatantly misrepresent them and try to nullify them with non-answers and non-sequiturs. It amazes me that you are truly deluded enough to believe you made so much as a scratch on the fine-tuning argument. I kind of felt guilty after a while, as it was so easy tearing your video to shreds that I was literally giggling with glee.