There are a few theories on how the universe, the Earth and life itself came about. One is evolution:
Evolution is change in the heritable characteristics of biological populations over successive generations. These characteristics are the expressions of genes that are passed on from parent to offspring during reproduction. Different characteristics tend to exist within any given population as a result of mutations, genetic recombination and other sources of genetic variation. Evolution occurs when evolutionary processes such as natural selection (including sexual selection) and genetic drift act on these variations, resulting in certain characteristics becoming more common or rare within a population.
[Source: Wikipedia: Evolution].
There is also Creationism:
A doctrine or theory holding that matter, the various forms of life, and the world were created by God out of nothing and usually in the way described in Genesis.
I think there are other beliefs as well, but I don’t remember them right now, and these are the two I am discussing at the moment. I will discuss both, but will have to do so in parts. In Part 1, I discussed the eye, how it works, and how it does not support the theory of evolution. In this post, I will talk about the eye further, as there is more to discuss on that topic than I previously thought, continuing from where I left off in the first post. I will mainly be quoting from By Design, a book written by Dr. Jonathan Sarfati, as he puts it best.
EFFICIENT IMAGE PROCESSING
The eye would be little use without the brain to make the final interpretation of the image. And here there are more amazing features.
Fovea and saccades
Only a small (<1%) part of the eye in the centre, called the fovea, has very high resolution for fine detail. It sees only the central 2° of the visual field, or about twice the width of your thumbnail at arm’s length. The fovea has a higher density of receptors, and needs a much larger area of our brain to process its information – over 50% of the visual cortex.
But most of the eye’s area is used for the peripheral (non-central) vision, which has much lower resolution, and therefore needs less brain processing power. You can understand this for yourself by trying to read this page without moving your eyes. Rather, normally the low-resolution parts of the eye detect objects of interest, and our eyes have unconscious motions (saccades) to aim our foveas at these objects. This way we can see the details of a wide area with minimal brain computing power.
So why not simply have the whole retina in sharp focus? Because there is no point having much detail unless the brain can process it, and our brain would need to be 50 times larger to process such information! This would give only a minute advantage over our current system, where the peripheral area can pick out possible areas of interest, then zoom in the fovea to analyze more closely – with much less brain processing power. But the ‘superior’ design would have a major disadvantage in our head being unable to fit through doorways.
Also, there would actually be a disadvantage to seeing too well in the periphery. For example, it would make it impossible to read, because if every word were in equal focus, they would all be attracting the reader’s attention – ‘pick me, pick me!’ – instead of letting the reader concentrate on a few words at a time. So the lack of clear focus in the periphery is consistent with an intentional design of the eye-brain system, quite apart from the much more efficient information processing.
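The ‘central 2° of the visual field’ figure quoted above can be checked with basic trigonometry. A minimal sketch, assuming an arm’s length of roughly 70 cm (my assumption, not a figure from the book):

```python
import math

def visual_angle_width(angle_deg: float, distance_cm: float) -> float:
    """Width (cm) subtended by a given visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# The fovea covers about 2 degrees; at arm's length (~70 cm) that is:
width = visual_angle_width(2.0, 70.0)
print(f"{width:.1f} cm")  # roughly 2.4 cm -- about twice a thumbnail's width
```

That matches the book’s ‘twice the width of your thumbnail at arm’s length’ description.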
Also, a website on the Internet talks about some of the complexity of the eye. I am reproducing part of the article here:
Most research on vision has concentrated on the small central part of our visual field. This is the part where we see best; the part tested with the letter chart by your optometrist. And it really is very small. Hold your fist directly in front of you at arm’s length – that’s about the area that we are talking about. The vision outside this central area is a good deal less sharp. Evolution has resulted in an elegant solution. Vision is good enough in the periphery only to attract your attention to an object. To explore it with higher resolution, you can then look at it directly if you choose.
That’s not the only use we have for peripheral vision, however. Imagine the scene: you are enjoying dinner with a friend and are engaged in a stimulating conversation. Without taking your eyes off your companion, you reach for a glass of wine, pick it up and take a drink. This feels effortless, but the brain needs to form an accurate picture of the shape and size of the glass and where it is located. (Grasping does also rely on touch, but we first judge the object’s size and shape with our eyes and then reach out and adjust our grasp to match what we have perceived.)
Because human arms and hands are below our eyes, reaching out often takes place below eye level – as does the manipulation of objects. We and our research colleagues wondered whether the visual brain might have adapted to this through evolution, resulting in a visual system that is more finely tuned below than above eye level.
We tested this idea using a task in which we asked six young adults to compare shapes presented on a computer screen to see if they could tell the difference between a perfect circle and a slightly distorted circle (vision studies like these typically use small subject groups because of extensive testing per head). As we expected, they performed best when they looked directly at these shapes.
We then repeated the test for shapes presented above and below eye level, and also to either side. Sure enough, our subjects were much better at judging the shapes when they were presented in their lower visual field. In fact, they were more than 50% better at this compared to when the shapes were either above or to either side.
If this was an evolutionary development, it would make sense that our lower-field vision was better than our other peripheral vision only to the extent that it allowed us to process the shape but not the type of the object. To see whether this was indeed the case, we repeated the experiments with a specific type of object: faces.
Faces are considered to be a particularly important class of visual stimuli for humans. Humans are social animals and accurate face perception is central to social functioning. Good face perception allows you instantly to recognise familiar people, distinguish friends from foes and read the emotions of people you encounter.
Studies have shown that humans and other primates have evolved specialised brain areas that are dedicated to processing faces. While it is important to be able to detect a face in your peripheral vision, there would be no advantage to being able to discriminate between faces more accurately in the lower visual field than in other peripheral areas. Most of our encounters tend to be face to face, after all.
When we tested the same people again using faces, this hypothesis turned out to be correct too. We found that our subjects were no better in the lower visual field than elsewhere, and were actually slightly better at discriminating faces which were shown to the left. This apparent curiosity is probably due to the fact that humans’ face-specific brain area is located in the right cortical hemisphere. As with other brain functions, the right side of the brain controls the left side of the body, so this is what you would expect.
You can read the rest of the article here.
Now, the person or people who wrote this article believe that this is all the result of evolution, though they fail to explain how. It demonstrates how amazingly complex the eye and brain are, and how intricate their relationship is. Just think of the odds against this happening by chance through evolution. Too high, right? This indicates creation, not evolution.
Also, with the human eye, there is our colour vision. The following is quoted directly from By Design:
Our eyes have two types of light-detectors, rods and cones. The cones are mainly in the central part of our retina, and need bright light – they detect colour. The rods are in the peripheral part, and are good in dim light, but can’t distinguish colours.
There are three types of cones. One is sensitive mainly to red, a second to green, and a third to blue. Each of them sends a signal to the brain if it detects light. But the signal by itself says nothing about colour, only about the brightness of the light it can detect. Yet from this simple system, we can distinguish millions of different colours. Here’s how:
If a small beam of red light hits three adjoining cones, only the red one will fire, sending a signal to the brain. But this signal doesn’t by itself say ‘red’ – it is only the lack of signal from the adjoining blue and green cones that makes the brain see ‘red’.
But what about yellow? Here, a beam of yellow light, wavelength about 580 nm (nanometres), will still land on three cones. But as they have a range of detectable wavelengths, both the red and green cones will detect the light. When the brain receives signals from adjoining red and green cones, it sees ‘yellow’. If the light is somewhat greenish yellow, the green cone will send a slightly stronger signal, so the brain sees a greener shade of yellow.
The brain can distinguish between many different wavelengths of light by how they affect the three types of cone. And if all three are fired equally strongly, the brain sees white.
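The mechanism quoted above – three cone types whose relative signal strengths encode hue – can be sketched with a toy model. The Gaussian sensitivity curves, peak wavelengths and width below are illustrative assumptions of mine, not measured cone data:

```python
import math

# Approximate peak sensitivities (nm) of the three cone types; the
# Gaussian shape and width are simplifying assumptions for illustration.
CONE_PEAKS = {"blue": 440, "green": 535, "red": 565}
WIDTH = 50  # nm, assumed

def cone_responses(wavelength_nm: float) -> dict:
    """Relative signal each cone type sends for a monochromatic light."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / WIDTH) ** 2)
        for cone, peak in CONE_PEAKS.items()
    }

# Yellow light (~580 nm) fires the red cone strongly, the green cone
# somewhat, and the blue cone barely at all -- the brain reads that
# combination of adjoining signals as 'yellow'.
responses = cone_responses(580)
print({cone: round(signal, 2) for cone, signal in responses.items()})
```

Shifting the wavelength toward green strengthens the green cone’s signal relative to the red one, which is exactly the ‘greener shade of yellow’ effect the book describes.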
Then, of course, you have animal eyes (also from By Design):
Chameleons: telephoto lizards
Chameleons have large eyes that can move independently. They also use a ‘telephoto principle’ to measure distances, which is most like man-made cameras but is unique in the animal world. Consider an old-style camera where you turn a dial to bring the object into focus – this can be a way of measuring the distance, by reading the distance setting of the dial when the object is focused.
To do this accurately, the image on the retina must be large, and the chameleon’s eye produces the largest image of any vertebrate relative to its size. This large image is produced by an ‘astonishing’ negative lens, likely ‘unique among animals’, i.e. one that makes light diverge rather than converge.
And the chameleon can see a sharp image of objects at almost any distance. That is, its eyes can accommodate very well, so it can even clearly see an object just 3 cm away. In contrast, for people, objects become blurry at anything like that distance – we really need objects to be 30 cm away before we can see them as clearly as a chameleon.
Lobster eyes – square facets and refractive focus
The eye of a lobster (and some other 10-legged crustaceans including shrimps and prawns) has a totally different method of forming an image from other creatures. The lobster eye shows a remarkable geometry not found elsewhere in nature – it has tiny facets that are perfectly square, so it ‘looks like perfect graph paper’.
This is needed, because the eye focuses light by reflection, unlike the camera-like eyes discussed above, which focus by refraction (bending of light) through a lens. The graph paper appearance is caused by the ends of many tiny square tubes on a spherical surface. The sides are very flat, shiny mirrors, and their precise geometrical arrangement means that parallel light rays are all reflected to a focus at about half the sphere’s radius of curvature. The square geometry is crucial, because only with the reflectors at right angles can it form an image from light rays from any direction.
Also, only if the tubes are about twice as long as they are wide can they reflect most light rays off exactly two mirrors, so the light ends up travelling in a plane parallel to the incident one. This is a two-dimensional analogue of the corner cubes in familiar reflectors that reflect light back in the same direction.
Concentrating light from a relatively wide area is useful when it’s quite dark, but in bright light, there is cellular machinery to move opaque pigment to block all light rays to the retina except those parallel to the tubes.
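The right-angle mirror property described above can be verified with simple vector arithmetic: in two dimensions, a ray reflected off two perpendicular mirrors comes out exactly anti-parallel to the way it went in (the corner-reflector principle the book mentions). A minimal sketch:

```python
def reflect(ray, normal):
    """Reflect a 2D direction vector off a flat mirror with unit normal."""
    dot = ray[0] * normal[0] + ray[1] * normal[1]
    return (ray[0] - 2 * dot * normal[0], ray[1] - 2 * dot * normal[1])

# Two mirrors at right angles: normals along the x- and y-axes.
incoming = (0.6, 0.8)                     # arbitrary incident direction
after_first = reflect(incoming, (1, 0))   # bounce off first mirror
after_second = reflect(after_first, (0, 1))  # bounce off second mirror
print(after_second)  # (-0.6, -0.8): exactly reversed, whatever the angle
```

Because the reversal works for any incident direction, square (right-angled) facets can bring rays from many directions to a common focus, which is why the square geometry is crucial.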
My note: the lobster eye has inspired human designers in fields such as computer chips, X-ray optics, etc. So far, the lobster eye seems to be saying that it was created by an intelligent being. Then there are the eyes of ‘primitive’ water bugs:
Trilobites: exquisite eyes on ‘primitive’ water bug
The complex compound eyes of some types of trilobites, extinct and supposedly ‘primitive’ invertebrates, are among the most complex eyes of any creature that ever lived. They comprised tubes that each pointed in a slightly different direction, and had special lenses that focused light from any distance. The lens had a layer of calcite on top of a layer of chitin – materials with precisely the right refractive indices – and a wavy boundary between them of a precise mathematical shape. This is consistent with their being designed by a master physicist, who applied what we now know as Fermat’s principle of least time, Snell’s law of refraction, Abbe’s sine law and birefringent optics.
So much for primitive!
There is also the brittlestar, which evolutionists have held up as something evolution designed. Let’s take a look at it:
The brittlestar or serpent star is similar to a starfish, but has five waving arms attached to a disc. Although it doesn’t seem to have any eyes, it has a puzzling ability to flee from predators and catch prey. And it even changes colour from dark brown in daytime to grey at night.
Joanna Aizenberg, an expert in material science, especially biological mineral structures, at Lucent Technologies’ Bell Laboratories, led a team that solved this mystery. According to one report, its ‘entire skeleton forms a big eye…brittlestars were one big compound eye.’ They found that the brittlestar species Ophiocoma wendtii secretes tiny crystals of calcite (calcium carbonate, CaCO3) which form ‘spherical microstructures that have a characteristic double-lens design’, and ‘form nearly perfect microlenses’. The array of microlenses focuses light a small distance into the tissues […] where nerve bundles detect the light. Brittlestar species that were indifferent to light lacked these lenses.
Quick note: the […] is not in the book. I put it there because there was a strange symbol in brackets at that point which I was not able to reproduce here. Back to the article:
The abstract of their paper states:
The lens array is designed to minimize spherical aberration and birefringence and to detect light from a particular direction. The optical performance is further optimized by phototropic chromatophores that regulate the dose of illumination reaching the receptors. These structures represent an example of a multifunctional biomaterial that fulfills both mechanical and optical functions.
Aizenberg used much easier-to-understand language when explaining to reporters the nuts-and-bolts significance of what the team’s findings actually mean. For example, she said that the visual system of lenses in the brittlestar is far superior to any manufactured lenses:
‘This study shows how great materials can be formed by nature, far beyond current technology,’ said Dr. Aizenberg. She went on to point out: ‘In general, arrays of microlenses are something that technology tried a couple years ago. Nobody knew something like that already existed in nature.’
Commenting on the brittlestar discovery in the same issue of Nature, Roy Sambles, of the University of Exeter’s Dept of Physics, explained that:
- There has to be ‘exquisite control’ of the calcite growth to form the lens structures.
- The calcite must grow as single crystals with the optical axis parallel to the axis of the double lens (to avoid birefringence effects)
- ‘each microlens should ideally have minimal optical aberration, and that seems to be the case.’
How did such a complex and intricate thing come to exist? Could it have been by evolution? That is what Gordon Hendler says in National Geographic:
‘Thanks to evolution, they [brittlestars] have beautifully designed crystal lenses that are an integral part of their calcite skeleton. Those lenses appear to be acting in concert with chromatophores and photoreceptor tissues.’
[The article online has been deleted].
However, he fails to explain specifically how these creatures could have evolved ‘beautifully designed’ microlenses acting ‘in concert’ with other (incredibly specialized) parts of the body. His evolutionary ‘explanation’ is totally vacuous – where is even a proposed sequence of small changes guided by natural selection at every step, let alone one demonstrated in the fossil record?
The brittlestar seems to speak of a designer – unless someone comes along with a demonstrated evolutionary sequence in the fossil record (see the previous paragraph). And Roy Sambles admitted:
‘Once again we find that nature foreshadowed our technical developments.’
So, against all the odds, this happened through evolution? Once again, if this is the case, PLEASE PROVIDE EVIDENCE!!! Because, unless that happens, this also seems to indicate creation.
It is also interesting to note that the eye has presented a challenge to evolutionists for a rather long time. But now they claim to have solved the mystery. However, they tend to focus only on the evolution of the camera-eye shape and BRUSH ASIDE MUCH OF THE BIOCHEMICAL AND ELECTRONIC COMPLEXITY. Scientific American published a much-publicized anti-creationist article in July 2002. Here it is:
‘Generations of creationists have tried to counter Darwin by citing the example of the eye as a structure that could not have evolved. The eye’s ability to provide vision depends on the perfect arrangement of its parts, these critics say. Natural selection could thus never favor the transitional forms needed during the eye’s evolution – what good is half an eye? Anticipating this criticism, Darwin suggested that even ‘incomplete’ eyes might confer benefits (such as helping creatures orient toward light) and thereby survive for further evolutionary refinement.’
As By Design points out, “it is fallacious to argue that 51 percent vision would necessarily have a strong enough selective advantage over 50 percent to overcome the effects of genetic drift’s tendency to eliminate even beneficial mutations.”
Scientific American goes on further to say: “Biology has vindicated Darwin: researchers have identified primitive eyes and light-sensing organs throughout the animal kingdom and have even tracked the evolutionary history of eyes through comparative genetics. (It now appears that in various families of organisms, eyes have evolved independently.)”
They contradict themselves. If the evolutionary history of eyes has been tracked through comparative genetics, how is it that eyes evolved independently? Evolutionists say that eyes must have arisen independently at least thirty times, as there is NO evolutionary pattern to explain the origin of eyes from a common ancestor! In other words, because eyes cannot be related through a common ancestor, that is taken as proof that they evolved independently many times – and since this must have happened so often, it must be relatively easy for eyes to evolve.
They lack evidence!
In 2007, Trevor Lamb and his colleagues at Australian National University synthesized these studies and many others to produce a detailed hypothesis about the evolution of the vertebrate eye. The forerunners of vertebrates produced light-sensitive eyespots on their brains that were packed with photoreceptors carrying c-opsins. These light-sensitive regions ballooned out to either side of the head, and later evolved an inward folding to form a cup. Early vertebrates could then do more than merely detect light: they could get clues about where the light was coming from…a thin patch of tissue evolved on the surface of the eye. Light could pass through the patch, and crystallins were recruited into it, leading to the evolution of a lens. At first the lens probably only focused light crudely…Mutations that improved the focusing power of the lens were favored by natural selection, leading to the evolution of a spherical eye that could produce a crisp image.
Some biologists use a computer simulation to explain the origin of the eye. See here. But, as one site points out:
This often-repeated tale sounds impressive at first, but it is not unlike most supposed explanations of the evolution of complex features. It scores high on imagination and flair but low on empirical evidence and thoughtful analysis. It most certainly does not represent a “detailed hypothesis.” Likewise, the simulation does an admirable job of describing how a mechanical eye could develop incrementally, but it is completely disconnected from biological reality. In particular, it ignores the details of how a real eye functions and how it forms developmentally. When these issues are examined, the story completely collapses.
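For context, the computer simulation mentioned above rests on compounding many tiny improvements. The arithmetic behind that kind of step-counting is simple; the 80-million-fold figure below is a hypothetical of mine, used only to show the calculation:

```python
import math

def steps_needed(total_factor: float, step_improvement: float = 0.01) -> int:
    """Number of compounding small improvements (e.g. 1% each) needed
    to reach a given overall improvement factor."""
    return math.ceil(math.log(total_factor) / math.log(1 + step_improvement))

# A hypothetical 80,000,000-fold overall change at 1% per step:
print(steps_needed(8e7))  # 1829 steps
```

Note that counting steps like this says nothing about whether each 1% step is biologically available or selectable, which is exactly the criticism quoted above.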
The above-mentioned article also discusses the eye and whether it is possible that it came about through evolution. It is quite interesting, so please click on the above link to read it. I would reproduce it here, but I fear this post would become far too long.
I hope you see now how evolution does not work for the eye. Please let me know your thoughts!
Until next time!