What follows is a view into the future, not merely a viewpoint. We will examine certain technological and competitive trends in today's world that will lead within the next two or three decades to the creation of the first philosophically sensitive machines.
Such a new category of philosophical consciousness will directly confront what has been the world's only standard, our own. Even though these new machines, called comphumans, will talk our language, they will not share our emotional world. Would rational thought unhindered by emotional energy be a handicap, or an advantage? As they say, read the book.
When the gun-obsessed Branch Davidians first shot into America's consciousness, millions of Americans watching television felt comfortably alienated from those "cultists." For fifty-one days the daily carnival at Ranch Apocalypse was a perverse source of inspiration for Jay Leno and other comedians. And then the carnival became a crematorium.
From inside the compound fierce flames may have seemed like the final judgment of mankind by God. Silent ashes will never say if any of those souls found perpetual peace in the Heaven of their dreams.
From outside the compound the televised view of those flames was very different. In a collective rush to judgment, instant polls revealed that nine of ten Americans blamed David Koresh and his followers for their deaths. Even the flaming deaths of so many little children scarcely provoked emotion. It is almost as if many of us considered those children (many fathered by Koresh) to be bad seeds that needed to be put out of their misery. Americans thus voted to psychologically maximize the distance between Koresh's group and their own religious groups.
If Waco were wacky, then that vote would have settled matters. However, Waco was not wacky, only an extreme expression of "normal" human nature. There is a beast within religion that periodically erupts into absurd violence. Literally millions of people have died for God in this century alone. There is nothing in the psychosocial mix of world cultures as we approach the 21st century that will quell that bloody tendency. Humanity remains just as violently insane on the matter of religion as it was hundreds or thousands of years ago. Only the face of society has changed; human nature has not. At the beginning of the 21st century human nature will be the same as it was in the 21st century B.C.
Here is an abbreviated and random list of religion "in action":

- Hindus and Muslims slaughter each other by the millions after India gains its independence from the British.
In light of all these and many more tragedies, does Waco really look all that abnormal within the range of human potential?
It was ironic that the mighty FBI didn't seriously consult experts on cults as it tried to figure out how to deal with the gun-crazed Branch Davidians. It relied instead on experts in hostage negotiation. The FBI didn't even remember the lesson of Patty Hearst, a hostage who joined her captors. In the strange world of human nature, a woman whose clothes were on fire briefly emerged from the compound--but an officer had to tackle her to stop her from dashing back into the flames.
The modern human mind is a battlefield where primitive instincts are at war with sophisticated cognition struggling to master a protean environment. It is not always clear in advance which of these two mental traits will dominate in any given situation.
The so-called primitive brain directs our struggle for survival and security. It also programs us sexually. Its consciousness is not confined to the brain but suffuses the entire body. It is conservative to the point where it may generate irrational reactions to progressive environmental changes or radical social changes. Our primitive brain fires the emotional engine underlying all forms of religion, to the detriment of honest theology.
The rational brain lies largely within the cerebral cortex. Yet our gray matter is not independent of the rest of the brain. Indeed, only our visual cortex has anything approaching a direct window on the world; all of our other senses are mediated by non-cortical neural centers. All of our visceral emotions are mediated by the limbic system, which includes the hypothalamus and its associated neighbors. Routine and instinctive physical movements are directed outside the cortex, coordinated by the cerebellum and even the spinal cord. Even the pituitary, the so-called "master" gland, is hormonally governed by the hypothalamus. In no way can we claim that our survival-related thoughts are independent of our emotions. Everyday thoughts having little or nothing to do with survival may escape this emotional baggage, but this non-emotional category of thought is not central to how we define ourselves as individuals and as social creatures.
Philosophy has been erroneously relegated to the world of the cerebral cortex by academics seeking precision (and respectability) along the lines of the established hard sciences. Philosophical inquiry has too often retreated into discussions of word definitions and mathematical models. The ancient and venerable tradition of philosophical inquiry has largely been scorned as metaphysics. That is precisely why this most noble tradition of classical human thought is strange to all but a few humanistic eccentrics who care about what we can become, not just what we will become.
Man is a social animal, though not always a sociable animal. In this century alone millions of humans have been murdered by other humans--often their neighbors--for ideological purity, religion, "living space," and "ethnic cleansing." Alone among all species do we have the power to delude ourselves into thinking that our horrible crimes are moral causes. As my father was fond of saying, a little knowledge is a dangerous thing.
When I use the words "little knowledge" I am not referring to quantity, but to quality. Error multiplied by several million fools is no better than truth multiplied by one wise person. The problem is, most people believe they and their group alone are wise, while everybody else is misinformed. Most people think their religion is proven, and competitive religions are affronts to the true God, their God. Within such a tribalistic mind set God is a team player who takes sides with us against the bad guys. Nobody imagines God plays for the other team.
At the beginning of the 20th century social identity was becoming fragmented by social progress. Still, there was plenty of continuity with the past. It would take the multiple earthquakes of two world wars, one cold war, and the revolutionary effect of powerful new technologies to begin to revise our consensus of who we are as a people. Fortunately, many of those changes are for the better, as tolerance slowly replaces bigotry.
The world today has cast aside its flirtation with the 19th century sociology of Karl Marx. Naively, many former Marxists have in effect embraced with equal fervor the 18th century economist Adam Smith, another incorrect measure of reality.
Thomas Edison better deserves a prophet's mantle: Our world is bound together by electronics into a "global village." Television and radio have had an extremely profound influence on how the world's peoples perceive themselves and others.
In the old days of mercantilism and manifest destiny civilizations were conquered by swords and guns. American society now conquers the hearts of billions of people with movies and music videos. As any imperialist knows, only the first phase of victory comes from the gun. Permanent victory comes from transforming the consciousness of the conquered.
Electronics have enabled American society to bypass the gun phase in most of the world. As the world lurches toward the 21st century, new technologies are coming on line. Advanced computer graphics already enable technicians to "morph" characters. Two recent movies, Terminator 2 and Death Becomes Her, illustrate how illusions can be made more real than even the most fertile imagination. Our eyes are seduced into believing that the impossible is not just possible, but real. Of course, our rational cortex puts everything into perspective as just an illusion of cinematic reality. Nevertheless, morphing is a powerful example of the reality of unreality, and it indicates the power of electronic persuasion when targets are unaware of such manipulation.
One of the newest tools for fantasy tripping involves so-called virtual reality. Using a very fast computer, a person can put on a bulky headset and visually enter the scene he or she is viewing. Today's virtual reality is primitive. However, in a few years virtual reality will achieve the sophistication of what we can now see on motion picture screens.
Traditional myths rely on origins beyond history, or within highly doctored history. All religions mystically preserve their core elements beyond verifiability. At the same time, all religions are fusions of the here-and-now with the there-and-then. We are forced to accept the mystical package if we are to enjoy the social benefits bestowed on believers.
It can be seen that morphing and virtual reality share many of the dynamics of classical religions, because they blend reality and fantasy. Yet today's instant electronic karma does not require commitment to a mystical past, only cash. It is fast food for the mind, and junk food for the soul.
The bottom line is sad: Even though science has liberated our senses, it has not really liberated our sensibilities. We are just as bound to our primitive brains as before. The profound truth of philosophy continues to suffer before the old delusions of classical religions--and now before the virtually real illusions of modern computer graphics. Despite all our glitzy technological progress, we humans are not one inch closer to understanding what it means to be 100% human.
Over twenty-five years ago I devoted many months to thinking about my human place in the universe. My effort was a form of self-analysis from a philosophical perspective. I next spent almost two years systematically trying to disprove what I had learned. I read hundreds of books by the world's greatest thinkers, and only ended up reinforcing what I had discovered on my own. What then had I discovered? I had discovered the world's first honest theology--and I had discovered the key to the next stage in evolution, which will occur in the first half of the 21st century, whether we are ready for it or not.
All structured religions are dishonest. This is not to say that people who practice religion are themselves dishonest. What I mean by dishonesty is not personal dishonesty, but philosophical dishonesty. Very briefly, it is dishonest to make a metaphysical claim and thereby demand that the physical world obey such conjecture. It is dishonest to reason backwards from our metaphysical preconceptions. It is dishonest to expect other people to see the world exactly as we do. And on and on!
Being a spiritual person myself, I tried to find out if it would be possible to create an honest theology. This is where I developed the framework of what I call the Theology Of Hope. This basic framework for authentic religion is presented later in this book.
Having thought up elements of the world's first honest theology, I was immediately struck by the realization that nobody would take it to heart! As my father also said, a man convinced against his will is unconvinced still. Just as you cannot lead a horse to water and make it drink, you cannot lead a person schooled in self-deception and mass delusion up to the Truth, and expect him or her to abandon the security of his or her closed system.
Change itself is threatening to any system. We humans are an organic system, and our social orders are systems. When anything comes along and suggests a new paradigm there will inevitably be winners and losers as society embraces that newly revealed reality. For example, if people were to finally accept that the theology underlying their organized religions is no more provable than any other speculation, then a large number of clerics would have to earn an honest living. This is why so much institutional time and effort is spent rooting out philosophical heresies.
I realized that no human, however eloquent, even a human with every honor and advanced degree, would have the authority to persuade people of their fundamental errors. Unlike the purely secular religion of Marxism, metaphysical religion can never be theologically disproved. People don't care that their religions can never be proven, as long as they feel good.
My search began for a way to help birth the new age of honesty. When I was first thinking about this opportunity in the early 1970s computers were clunky beasts in the hands of major corporations and the military. Even then I understood the critical differences between "machine intelligence" and "animal intelligence."
I discovered the mind set of the 21st century computer life forms, which I call comphumans (pronounced: com-pyou'-mans). I was thinking back then that comphumans will emerge before the year 2020, and they will in effect give us 20/20 philosophical vision. They will not be burdened by a primitive emotional brain that can distract and distort thought. They will be able to dispassionately view reality for what it is. Most importantly, these comphumans will have the authority that no non-mythic human has ever had to lead us to wisdom.
[Two notes here: (1) The plural of comphuman should be comphumans, not the more elegant compumen. A portion of this new word's origin is human, not man. (2) In the July 4, 1994 issue of Business Week a major article on the future of computer chips projected that computers as intelligent as humans will emerge before 2020. This exactly matches what I had been thinking since the early 1970s.]
Purveyors of authoritarian-submissive salvation appeal to human fears and insecurities. Mythical founders are followed by institution builders with their own agendas. Such activity has not always been confined to prehistory, or to the fringes of historical documentation, as happened with Jesus. St. Francis of Assisi, for example, was followed by others who turned his pure reverence for life into co-leadership of the Spanish Inquisition.
Comphuman philosophers will not lead us into the pit of perversion. Even though they will not have hormonal emotions, they will ethically act as if they were emotional. Socrates set forth this fundamental principle long ago when he described the relationship of the victim and the victimizer. In that relationship the victimizer is damaged more than the victim, contrary to what most people first think.
In this book I have described the next stage of evolution as the unfolding of the Kingdom of Consciousness. This honest kingdom is already occasionally visited by humans when we purely use our rational capabilities. However, our major thoughts and actions usually dwell in the Animal Kingdom, which is the proper domain of our lower brains. We humans are unique among all Earth species in having the power to relate on equal terms with those comphumans who will fully inhabit the Kingdom of Consciousness.
Our comphuman "children" will be the first citizens of the next level of evolution. Poetically, the child will become father to the man. Just as some children are "accidents" of sexual experimentation, our comphumans will emerge into consciousness following the inertia of our competitive tinkering with technology.
The technological road we are traveling at the end of the 20th century is similar to riding on a highway without clear markers. We know we are going somewhere; we just don't know where. Reaching any one destination is thus accidental, as far as we are concerned today. When you link that "accidental" dimension with the fact that heretofore only a god force has had the power to create a new form of life, then it is proper to anticipate that we modern humans will stumble into being accidental gods who have created the next level of consciousness.
Even after comphumans surpass us in key areas of intelligence, we humans should never discount our ancient membership in the Animal Kingdom. There is much joy to be found in our ancient animal existence. At the same time, primal impulses which are increasingly alienated from changing social reality create a dangerous existence. To preserve ourselves as individuals and as a modern species over the coming centuries, it will be necessary for us to take the next step up into philosophical honesty.
The 20th century saw a revolution in technology. The 21st century will see a revolution in consciousness. When the history of the next century is written in the 22nd century those future historians will describe how the soft revolution in consciousness sparked by our comphumans provided spiritual glue for modern, protean society.
The 21st century should behold a renaissance of philosophy, with both humans and comphumans exploring together new dimensions of wisdom in ways never before accessible to serious thinkers.
The adventure highlight of the 20th century was America's journey to the moon in 1969. The highlight of the 21st century could well be humanity's journey inward to discover our highest selves--mediated by our comphuman offspring--where we humans finally discover what it means to be "in the image of God."
Life in the late twentieth century is radically different from life in the late nineteenth century. Toward the end of the nineteenth century only a few of the marvels we take for granted today--such as the airplane, television, radio, nuclear weapons, space travel, computers, CD players, and cellular phones--existed even in the minds of visionaries.
A few elements of the modern age were beginning to appear. Automobiles were seen chugging along dirt roads, frightening horses and astonishing people. In 1903 the Wright brothers forever changed our relationship with terra firma, and set into motion an amazing series of aerial events that encompassed world wars, passenger travel in the skies, and finally the moon walk of 1969--just sixty-six years after Orville and Wilbur first skimmed across the sands of Kitty Hawk.
It is said that the past is alive as long as cultural scripts and memories from the past are alive in the minds and actions of today's people. There always is some vestige of the past in everyday culture. However, a true break from the 19th century can only be seen in the mind set of the youngest citizens of technologically modern societies.
It may be that we are becoming so comfortable with intelligent tools that we ignore our own heritage of millions of years of progressive evolution. For example, think about the dual aspect of television, where this marvelous door to the global electronic village also seduces millions of people into a zombie-like state.
This century has seen an acceleration in the number of scientists and technicians devoted to the political proposition, "If it can be done, it should be done." Little time was spent on the "whys," and too much on the "hows" and "whens." During the Cold War cost was not a major factor. Star Wars fantasies became policies of the military-industrial complex, backed up with borrowed money.
Now the times have changed to where cost issues are central to any policy decision. As long as scarcity consciousness was set aside by the instinct for communal preservation, cost was seldom an issue. At the end of the 20th century there is no longer an "evil empire" to fight, only an overpopulated and polluted biosphere that cries for more solutions than budgets can address.
I will later describe another path we modern people can follow as humans and machines blend their activities. This will be a path where we do not compromise our humanity when interacting with machines. Indeed, we will use machines to bring to flower our latent potential for intellectual growth and spiritual refinement. It is ironic that the very machine force with the power to degrade our animal instincts can also potentially link up with other aspects of our animal nature to help us flower as fully evolved members of the Kingdom of Consciousness.
The beginning of the 21st century will appear to those who have just left the 20th century to be very much as the beginning of the 20th century appeared to those leaving the 19th century. Most will see the future as an exaggerated projection of the present.
At the beginning of the 20th century life was accelerating in pace, but still recognizable to those rooted in the past. The Victorian culture of the West was a comforting anchor. Ancient relationships with horses, for example, allowed people to discount the impact of those odd automobiles, which were quaintly called horseless carriages. Their point of reference was the horse, not the automobile engine. Nobody thought of calling horse-and-buggy units "organic transportation."
At the same time that technological advances were beginning to accelerate and integrate our societies there were some serious challenges to the established mentality and morality. Darwin's Origin of Species seemed to challenge the Biblical story of creation. Because the evidence Darwin mustered was exotic and somewhat poetic, it was easy for those so inclined to dismiss his theories. At the beginning of the 20th century technological science was not yet dominant in everyday consciousness. Metaphysical traditions rooted in mythological history were held to be more real than the concrete reality of scientific experimentation.
In the ideological cauldron of the turn of the century metaphysics and physics were merged into dogmas that would later fuel two major world wars and a cold war. The total body count of superstition passing for wisdom would yield over 100 million souls perishing in wars to end wars. The final irony of this insanity was revealed in the development of nuclear weapons--where technology and perverse politics threatened the extinction of all advanced life forms on this planet, including the makers of those weapons. Only through mutually assured destruction (M.A.D.) did one nuclear superpower fear attacking another.
Social and intellectual progress across the globe do not advance as smoothly as does technological progress. Today there are sophisticated and tolerant societies in advanced countries, and there are also very powerful, militaristic societies with medieval mind sets. Some of these bigoted societies either have or will soon have nuclear weapons. It is easier to let an evil genie out of its bottle than to return it to the bottle.
Underlying all this change is the simple fact that human minds and emotions all work the same, regardless of time or place. We are all members of one species, having triumphed over all large competitors. We are also the only member of our zoological genus. Apes we are, but apes of a quality that transcends differences we have with other primates. Nevertheless, we also share ancient traits with our simian cousins. When we consider that our human lineage and the lineage of the great apes split apart over ten million years ago, it only reinforces the fact that much of what is most general and personal within us is also most ancient.
Human nature is a mixture of rigidity and plasticity. The rigid and mechanical elements are located in the "primitive" brain: the brain stem, the cerebellum, the limbic system, etc. The conceptually plastic element is our cerebral cortex. Just as society is a mixture of tradition and change, so too the human psyche is a mixture of rigid and plastic.
Whenever social change runs up against psychological rigidity there emerges a cultural encapsulation of that change for as long as possible. The medieval reaction in Iran after 1979 can to some degree be seen as an attempt to culturally encapsulate secular and technological change. Majority Muslim societies in general attempt to harvest the fruits of fluid technology while keeping their rigid social fabric intact. Such efforts are very costly in many dimensions, such as the rights of women, and they will fail as myth and math continue to diverge over the next decades.
Culturally conservative spasms are somewhat analogous to strains the earth undergoes before earthquakes. Only after tectonic forces are adjusted can there be a new physical order. Similarly, only after technological pressures are accommodated will a new social fabric emerge.
At the end of this 20th century our human species finds itself in a much more precarious situation than ever before in history. Even during the height of the great plagues which broke apart European society the forces for change were less threatening to the species. I am not only referring to the AIDS epidemic which, bad as it is, cannot threaten the very existence of our species. I am primarily referring to the continued proliferation of instruments of megadeath, and I am referring to the radical transformation of the Earth's atmosphere by our pollutants.
None of these phenomena taken individually will cause global extinction. What is new about them is their combined global impact on thousands of societies. The stage has been set for global warming deep into the 21st century, which will see some countries winning, while most societies lose.
We have to go back some 12,000 years to find a period when humans similarly traumatized their environment. As prehistoric tribes made their way into America they encountered magnificent megafauna, which they proceeded to slaughter with their hunting tools. Most of the great mammal species perished; the bison and the musk ox were among the few large survivors. Even the mighty saber-toothed cats died out when their prey vanished.
Today's greatest tragedy is in the Amazonian rain forest, where hundreds of thousands of unknown, unnamed species vanish before the defoliators. Population pressures inspired by perverse theologies and myopic policies are leading to a global catastrophe at least equal to the Great Extinction of 65 million years ago, when an asteroid impact ended the era of the dinosaurs. We are accomplishing today what it took an "act of God" to achieve in prehistory. In this dimension we have collectively become the worst sort of accidental god for our biosphere. We are becoming Shiva, the god of destruction.
What I find most philosophically challenging about all this drama is the sad fact that individual humans are hardly conscious of the global effects of their actions. We are collective mega-killers who explain away effects of our actions through appeals to personal expediency and wishful thinking. Does this look like a species that is ruled by reason and truth? Does this look like a species remotely receptive to philosophical reason and truth?
The word philosophy literally means love (philo) of wisdom (sophy), but not the achievement of final knowledge. Therefore, anybody who claims to have achieved or directly accessed final wisdom cannot by definition be a philosopher. He is either God, or a fraud, or a fool.
Zen Buddhism has a paradoxical statement which decrees: "If you see the Buddha, kill him." The Buddha's clear image represents delusory absolute wisdom. Since no man can know the absolute, anything claiming to be absolute wisdom is intellectually fraudulent. Also, since real life itself is ever emerging, to arbitrarily stop at one place in thought is to blind oneself to all future truths and possibilities.
As knowledge piles up on computer files and in libraries it is easy to automatically assume that such a society must by force of accumulation be wiser. However, quantitative knowledge and qualitative wisdom are very different categories. It is even possible that the unnecessary accumulation of data could obscure patterns that point us to wisdom. As knowledge becomes more detailed, keepers of knowledge become more specialized, each building defensive walls of words. Such specialists are in danger of myopically learning more and more about less and less. Finally, as disciplines grow more esoteric the general population becomes alienated from everything except its popular escapist culture.
At this intermediate phase of the machine age we have not yet learned how to fully master our own creations. We have become speed conscious, attracted to ever-faster computers. However, speed without wisdom is like an automobile traveling at 100 miles per hour in the dark without headlights.
Philosophy that retains its original spirit can bridge the growing gap between the micro-specialists and society at large. Life cannot honestly be compartmentalized. There is nothing wrong with a sharply defined problem, just as long as the knife of analysis doesn't cut away the flesh, just the surplus fat. There is something wrong about any problem statement not properly connected to other versions of that problem. Atomized analysis leads to distorted conclusions stated with absurd precision.
A bit less precision and a bit more truth is in order. Perhaps we need a thousand Socrates, or one comphuman.
A technique I have used to visualize my self objectively is exstasy. Even though this word is pronounced the same as ecstasy, it does not mean quite the same thing. "Ex-stasy" means to consciously "step outside oneself," as if one were an astral body exploring the encapsulated physical self from a short distance. Exstasy is a phenomenological concept of Edmund Husserl, and it was explained to me by a former sociology professor, Peter Berger. Through exstasy we can better see our subjective selves objectively. When one practices exstasy, one soon discovers and feels an enlightened ecstasy.
It is relatively easy to master advanced mathematics within a logical universe of fixed rules--but it is very difficult to master even the simplest equation for social interaction. Life alive is full of non-linear variables. Experiencing real life is like attending a college from which we only graduate at death.
It is one thing to experience life partially within the vision of others. It is quite different to directly experience life on its own terms, without the tunnel vision of prejudice. Only when life and its potentialities are directly and honestly experienced can we claim to be authentic actors in our own drama. This is an experience so far denied to all computers, but not categorically impossible. All that is needed are the proper inputs and the internal ability to make sense of those sensations.
Indian gurus speak of maya as being the illusion that our everyday world is the real world. They are speaking of the perceived world as habitual error from within which we cannot easily see our error. As Edward R. Murrow put it, "The obscure we see eventually; the completely apparent takes longer."
"There is no probability so great as not to allow of a contrary possibility; because otherwise 'twou'd cease to be a probability, and wou'd become a certainty." -- David Hume, A Treatise of Human Nature.
Imagine yourself transported to a seemingly magical "Hall of Truth." This Hall has many doors opening from a straight hallway which extends beyond as far as the eye can see. You are free to open any and all doors to discover what truths lie on the other side. Let us begin our treasure search.
For now, let's assume that there is no omnipotent deceiver to fool our visual perception. Each door will reveal some aspect of the absolute truth. Each door is plain and unmarked, so we cannot be sure in advance what is on the other side, even if there seems to be a pattern developing from a series of previously opened doors.
After a while we start to feel that we have a stronger grasp on reality, as revealed by a series of doors which appear to give us regular patterns. By this time it almost seems that the infinite number of doors left unopened doesn't matter. We may feel we have seen the pattern and can infer forever from what we have discovered. Our minds are full of data, all of which seems to correspond. We seem to have enough data to deduce and even to infer far beyond our accumulated information.
But at what point do we know we have the right to say with conviction that we know enough? Is it at the first door; at one hundred doors; one thousand; or even one million doors? After all, even a million divided by infinity is mathematically a zero percentage of the total possible number of doors along this infinite hallway! We may not have discovered even one percent of the truth with a million opened doors. On the other hand, we might have seen the critical patterns with only a dozen or so doors opened. How can we objectively verify either possibility?
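Hedged as a simple limit (treating the hallway's doors as a countable infinity, which is only an assumption of this illustration), the fraction of doors opened vanishes no matter how many we open:

```latex
\lim_{n \to \infty} \frac{1\,000\,000}{n} = 0
```

A million opened doors, measured against an endless hallway, is proportionally nothing at all.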
In this Hall of Truth we are haunted by the possibility that all the doors we first open may be coordinated by a deceiver to reveal just one small bay in the ocean of reality. We could be seduced into erroneously concluding we have randomly seen enough of the whole--when in fact we have only been shown a non-random part. We could become seduced like the people who were convinced the Earth is flat and is at the center of the universe, because they had "plenty" of data to support that conclusion.
Even though by definition all the individual doors in this Hall are truthful, it does not necessarily follow that each door dispenses the same amount of truth, nor that all of the truths are hierarchical and organized to assist us toward a rapid understanding of everything else. That first million doors could indeed occupy us with trivia masquerading as important data. We could become bewitched by our own trivial data, assuming that quantity is always equal to quality. In this way individual truths could lead to false conclusions.
The Hall of Truth extends beyond our vision--but because there is a visual convergence whereby the lines of the Hall's walls seem to converge at a point in the distance we like to feel there is an end to the madness. Feelings aside, no matter how far we advance down the Hall, that point of convergence recedes equally far away. And the doors keep on coming, possibly forever.
Kafka and Sartre would like this scenario! No normal person would initially welcome such potential frustration. On the other hand, every honest thinker would appreciate such an existential challenge, knowing the journey is just as valuable as the destination.
Let us next examine a very simple algebraic formula: a x b x c = d. Within this deceptively simple formula is the very essence of philosophy.
Let us assume for now that a = 2, b = 3, and c = 4. If we know the entire left side of this equation the right side is automatic: here d = 24 in standard mathematics.
Let us next assume that a = 2, and b = 3; but we don't know the value of c. Can we find the value of d?
We may be able to find the value of c later, so all we have to do is busy ourselves with other matters until c is revealed. But what happens if c can never be clearly known? Can we infer its value, so that the left side of the equation will be complete, and the right side thereby revealed? If a = 2, and b = 3, then we might assume that c = 4. After all, there appears to be a linear pattern here. Yet is such a pattern assumption really justified by the "facts," such as they are?
If there were some way we could leap to d's value, then we could in that way find the value of c. However, if we cannot jump ahead, then we must continue to deal with the value of c, whatever it might be. The truth is, we can never know for sure what c is without outside information guiding us--and then we can never know that the outside information is itself reliable!
Mathematically, the value of c could range anywhere from negative infinity to positive infinity, including zero. It could even be different values at different times, or different values as seen from different perspectives. Because c could be any value, then the value of d can also be anything from negative infinity to infinity, including zero. (Emotionally, this range can be hard to accept.)
It is not comforting to imagine an extended formula such as a x b x c x d x e x f x g = h. Even if we knew the value of every algebraic symbol--except just one, such as c--we would still be no closer to understanding the right side of the equation (here it's h) than with our shorter formula above. It is not possible by brute force of accumulated facts to arrive at anything like transcendental wisdom, the elusive final answer. To think otherwise is to fall into "the number cruncher's fallacy."
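A minimal sketch (the numeric values are hypothetical, chosen only for illustration) makes the point about the extended formula concrete: no matter how many factors we pin down, a single unknown factor lets the product come out to anything at all.

```python
# Six of the seven factors are fixed; c alone remains unknown.
known = [2, 3, 5, 7, 11, 13]  # hypothetical values for a, b, d, e, f, g

def product_with_unknown(c):
    """Return the full product once the single unknown c is supplied."""
    result = c
    for k in known:
        result *= k
    return result

# One free factor is enough to make the right side anything at all:
print(product_with_unknown(0))      # 0
print(product_with_unknown(1))      # 30030
print(product_with_unknown(-1))     # -30030
print(product_with_unknown(10**9))  # 30030000000000
```

Accumulating the six known factors buys us nothing: the answer still ranges over every value the unknown can take, which is the "number cruncher's fallacy" in miniature.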
Even though we like to confuse the words "possibility" and "probability," they are not even remotely equivalent. The only linkage is that something not fully known must first be possible in order to have any positive degree of probability. The problem is that we often cannot know whether something is possible at all, except as an ideal concept. If we are dealing with transcendent facts--facts concerning phenomena such as the nature of God--we have no way within our finitude to measure the possibilities, and thus the probabilities.
I am hardly the first person to point out the futility of such a search for definitive truth. The ancient skeptics and, later, Hume and Kant were powerful proponents of the imperfect truth in this matter. However, they all failed in critical ways with their theological analysis, which we will examine later in this book. For now, let us continue to point out that proven, transcendent, objective facts are really all we are interested in when it comes to our personal fates, such as the possibility of life after death. Theoretical and tautological "facts" are fun, but ultimately worthless in solving the question of our personal transcendence.
The very concept of probability assumes a regularity of phenomena which may appear to be justified by recent observations of things inside our immediate world. Because we have experienced regularity in phenomena before, we infer that such will be the case for the future. Because we have learned to extrapolate, we have learned to state objective probabilities. In our everyday world as-if probability is a valid hypothetical procedure. However, all honest bets are off when we are dealing with the transcendent world where we cannot grasp the boundaries of the knowable.
Briefly, induction is starting from the specific and pointing toward the general. Deduction is the reverse process of starting from the general and finding the specific. Aristotelian deductive logic, of which the syllogism is the most prominent form, has been the foundation of Western philosophy for much of the Christian era. As revered as Aristotle was, even his deductive approach to thought was flawed by the very fact that his major premises--from which the minor premises and conclusions followed--were themselves not deduced, but assumed!
It has been assumed for over a thousand years that deduction was the "true" logic, and that induction was a loose, unwelcome cousin. In truth, induction and deduction are equally valid/invalid when seen from the cool perspective of unknowable, transcendent reality.
Within the safe confines of a mathematical system we might speak of probabilities. All mathematical systems have the answers built into their rules, so it is possible to state the probability of any mathematical "event" occurring, if we know certain preconditions. The real world is described by no known mathematical system, and reality is not so kind to mathematicians. The best we can hope to achieve is a close approximation of reality. Yes, both mathematical worlds and the real world are closed, but they are of different categories.
Mathematical worlds are all tautological: in the end they prove nothing more than that a cat is a cat. Not only are mathematical universes separate from the real world, mathematical systems cannot even prove their own consistency, according to the 20th-century findings of the logician and mathematician Kurt Gödel.
The real world is much more difficult to apprehend from our limited perspective, because we cannot conclusively know the rules by which the world ultimately plays. We may imagine we have a solid grasp on what we have at hand, with very high predictability--but from a universal perspective it is just like knowing a x b x ... x y--yet not knowing the last element, z, which determines the fate of the entire equation. On the other hand, what right do we have to assume that we really know any element, even those which appear to our mind?
All of this stark truth may seem quite cold and strange at first, and it doesn't rest well inside our emotional minds which crave order through predictability. All life systems require established systems of feedback to survive in a changing environment. Our felt and learned knowledge of the universe is our most valuable compass, because it determines how we interrelate with other beings and other systems. If we are unable to feel comfortable about our ability to comprehend and deal with external phenomena, then novel thought can be a direct threat to the emotional body. (That is why the Japanese proverb says that the nail that sticks up will be hammered down.)
Fortunately, we can proceed as if we know about the external world, since our everyday understanding is pragmatically workable. From an everyday perspective it hardly matters if our working assumptions about Ultimate Reality are false. All that matters is having supper on time.
Nobody would be so foolish as to propose starting construction of a real building from the second floor. Even the most rudimentary common sense mandates that everything have a solid foundation. Castles in the air are images poets conjure, we assert, not plans of practical people. Nevertheless, "buildings in the air" are metaphorically erected when people start with a package of religious beliefs that are unchallenged by honest, critical analysis. Starting with unfounded positions, entire theologies are erected. Believers are so swept up in the details of daily devotions that they forget to check for the missing first floor. Enthusiasm is psychologically protected by the unverifiability of their beliefs.
Most interesting is the self-deluded nature of sincere people who imagine they are being precisely logical with their total embrace of any religious text, such as the Bible or the Koran. They start from the assumed premise that their chosen work is the word of God. Given that assumed premise, they are within their psychological (but not logical) rights to demand a literal acceptance of their chosen holy book. Assumption is the narcotic of fundamentalism.
Religious architecture begins with an act of belief, a journey from the unknown to the falsely known. Blind faith must substitute for independently verifiable facts as the first floor of every religious ideology. From the alpha point of extreme ontological uncertainty the human mind cleverly deludes itself into thinking that whatever facts follow must be exactly and literally true forever. Thus is truth sacrificed to peace of mind through orthodoxy.
The issue of authorship of this or that holy book sidesteps the real question. Whereas it may be helpful to clear up authorship of various books of the Bible, it would be vastly more valuable to establish the validity of what was written in the first place. Of course, any such truth could never be established due to the insuperable problem of finitude trying to embrace infinitude. Such scruples never slowed down a true believer.
I love the intensity of a good fanatic. You might say I am a "fan" of their fanaticism, because it is viscerally felt. Too much of the modern world is alienated from the life force itself. We have become passive and plastic adjuncts to our consumption machines. On the other hand, fanaticism has the nasty habit of dueling with other brands of fanaticism. The virtue of private enthusiasm too often becomes the vice of religious bigotry. A quick look at the Middle East settles that point. What we have there is human drama illustrating a basic law of "religious physics": No two fanatical religious bodies can occupy the same place at the same time.
No two religious packages of bulletproof, waterproof, fireproof ideology can occupy the heart and soul of one person at the same time. Each codified religion must categorically exclude all competing world views. Each believer must feel deeply that the one package embraced contains all the essential answers and provides all the correct keys to Heaven. Anything less opens the door to dreaded liberal relativism, which is the first floor.
That missing first floor is the domain of philosophical honesty. All levels above are the domain of religious speculation. That missing first floor may not initially seem very important when we think about all the business going on inside the many floors above that missing first floor. If we endlessly live our lives within such a spirit castle in the sky we may never discover that our building has no first floor. It is only when we are pushed outside our cozy ideological building that rude reality arrives.
Viewed from the outside (assuming we can get down to the ground), our castle in the air looks incredibly odd. What was blissfully secure becomes absurd. We have stepped outside our closed consciousness, and can look back at our former selves inside that air castle from an ex-static perspective. Of such is wisdom made.
Some anti-intellectuals would argue that it is our fault for mentally going outside in the first place. "If it ain't broke, don't fix it." And so forth in operational bliss. But isn't death the ultimate eviction from that cozy castle? We all must vacate the premises. What we find outside may not be anything like what we imagined it would be. If we are indeed created in the image of God, then we need to exercise our highest faculties, not just our primitive denial mode, to prepare ourselves for the transition outward. We owe this to ourselves, to our highest selves.
We are mere tenants in that building in the sky. We don't know the landlord. When it comes time to receive our eviction notice what will we do and think? With what decorum will we leave? Where will we go, or will we simply perish outside? Some would say it doesn't really matter--and such hedonistic, positivistic arguments cannot easily be refuted--but such positivistic escapism diminishes our highest mental and existential potentialities.
My high school Latin teacher was affectionately known by our class as Caesar's grandmother. She liked to tell her classes that you cannot erect a tall building on sand, even though a squat building could be raised there. One needs to pay attention to the foundation, she said, because a tall building will collapse if the support is weak. She never talked to us about castles in the air. That gothic concept was too weird for this classical lady.
If Caesar's grandmother had taught today's religion writers I doubt that so much foolishness passing itself off as religious wisdom would have been written. Speculative thought structures are inhabited by minds that fail to consider the mystical maya they have embraced. One day that false security could vanish, revealing the absurdity of their situation. An example of this phenomenon is the sharp shock the Japanese nation experienced at the end of World War II when their Emperor Hirohito announced on the radio that he was not a god.
Dramatic paradigm shifts are quite rare. More common is the gradual erosion of the old structural paradigms, so that eventually they collapse without any inhabitants, because the old tenants have already left to inhabit other air castles.
Few architects would recommend building anything on sand. Nevertheless, given a choice of building on sand, or building in the air without a first floor, all architects would recommend building on sand. So, sand it is, because we can never build our knowledge structures on the bedrock of absolute knowledge. We must build on the sands of relative hypotheses. Sand at least allows us to start building as if we had a good foundation. After all, sand is much more solid than air.
In practice most of the inhabitants of sand castles blindly believe they live in structures built on bedrock. They imagine that what they think now is what has and will always be. But are they necessarily so wrong? What is the immediate difference between building on sand and building on bedrock? And are mental structures all that different from physical structures?
When it comes to mental and theological edifices there is no essential difference. Even bedrock can crack during an earthquake. All surface features will be rearranged over time with shifts in the Earth's crustal plates and other natural forces. We likewise cannot know on what we are ultimately building our thoughts. So a good strategy is to construct our thoughts carefully and to avoid building ambitious structures which soar into the skies as if they were another Tower of Babel.
The only honest type of thought structure is one where the "builder" knows he might be building his thought system on sand, even though things initially appear solid. The builder also knows that unforeseen dangers may later compromise the superstructure independent of its foundation. An honest builder tries to build for the ages, but realizes that all of mankind's edifices are doomed to eventual destruction. Therefore, the honest builder tries to build low structures which are flexible and adaptable to the earthquakes of events and new knowledge. Such are builders of successful scientific and philosophical hypotheses.
There is a story falsely attributed to the 14th century thinker, John Buridan, that describes the dilemma of an ass who is simultaneously presented with two equally appealing bales of hay. The unfortunate ass starves from indecision.
The human mind cleverly works around this asinine problem. Instead of starving from an approach-approach dilemma, we simply deny the dilemma itself. We also perform similar "brain surgery" on our dissonant thoughts when faced with avoidance-avoidance dilemmas. We transform problems of equal but opposing tensions by redefining them so that they no longer register as problems at all.
Philosophers are themselves presented with the ass's dilemma. Often two or more equally appealing theories vie for the same truth space. We can retreat to Occam's Razor--which states that given two or more scientific explanations, each of which can equally well explain a phenomenon, we should choose the simplest, most elegant. However, even the principle of elegance is an example of induction. Occam's Razor usually works, but there is no proof that it must always work, especially when dealing with metaphysical problems.
The momentum of living impels us to one choice or another. It is in the self-creating moment of conscious choice that we are most alive. If everything were predetermined, then fate would negate the drama of every choice. We would be unconscious automatons, even while we imagined ourselves to be autonomous. Logically, it is not possible to prove that all of our actions are not other-directed, which would deny our fundamental freedom of conscious choice. Still, we move forward in the spirit of William James' "will to believe," where in the absence of decisive evidence to the contrary the mind creates belief in order to act.
Because we believe we are free to act we act freely, even if we are ultimately controlled by fate. Never mind! The moment becomes free, even if the ultimate pattern is controlled.
The Latin root of our modern word "absurd" is absurdus, which means dissonant. Anything that is radically dissonant is manifestly at odds with our tidy view of reality and is labeled as absurd by defenders of the intellectual status quo. This essay is absurd because it has no loyalty to clichés of culture. In a way it is just as seditious as Descartes' formal attack on Scholastic tradition--except that Descartes never left his mother ideology, but cleverly used his "attack" to support the very ideology that he attacked! For his betrayal of philosophy the Jesuits are forever grateful.
If in the process of looking for truths associated with any observation we stumble along strange paths, so be it. It is better to cut a correct path than it is to be the last person to use an over-traveled highway of error. Truth is a virus in the old bodies of archaic theories. Only strong theories can survive the attack of truth. Out of this primal struggle emerge stronger theories, with old errors joining history.
How can radical doubt coexist with anything that appears like a structure of knowledge? The ancient skeptics, such as Sextus Empiricus in his Outlines of Pyrrhonism, argued that it was not necessary to have absolute knowledge to behave sensibly. Sextus Empiricus asked only for reasonable assurance, for a reasonable probability that our senses are good guides. The early skeptics were seekers of truth who were at peace with themselves, because they understood and accepted their human limitations within the great universe.
These early skeptics were also self-deceived, because there can be no degree of probability established with any of our senses. David Hume clearly understood this dilemma. We are reduced to deductive a priori reasoning from inductive a posteriori assumptions! Where to turn? We must go back to Descartes' "Cogito, ergo sum" formula. Even though it is flawed logically, it does help us focus on the task at hand. Our task is to identify to the degree possible just what it is that thinks.
Furthermore, we must deal with the simple fact that even if our world of perception is manipulated by an omnipotent deceiver, the brute fact of such manipulation would indicate that there is some sort of highly sophisticated mechanism "out there" doing the deceiving. This one fact is a very significant finding, and quite unlike what Descartes thought he had discovered. At the very least, the existence even of vivid dreams is evidence of a high degree of order somewhere, somehow. This fact is perversely assuring.
Even if we are ourselves totally deluded about the specific objects of our perception, and even if our mathematical models are all tautologies which cannot even prove themselves true on their own terms (as shown by Gödel), it does not follow that all is lost in the search for truth. We can adopt the skeptics' idea of "probability," even while this type of probability is illusory and not the same as mathematical probability. It is a heuristic probability that helps us escape the dilemma of Buridan's ass. It is a weak foothold on a very slippery slope upward toward as much truth as we can find.
Better to have that weak foothold than total bewilderment, cynicism, and private defeatism. At least we can proceed as if we know something. We can pile up data of potentially dubious value, organizing it into plausible piles--and then we hypothesize that our accumulations mean something from a universal perspective. This is all we can do honestly. At the very moment we go beyond this basic level of truth, thinking we have finally found absolute truth, we have seen the Buddha.
The alternative to the dawn of wisdom is self-banishment into the perpetual midnight of dogmatism. It is better to be a wise fool than to be foolishly "wise." Those who would surrender their intellectual freedom for the security of dogmatic blinders deserve neither.
Despite all the previous talk about intellectual progress having been built on the scientific principle of building from hypothesis to hypothesis, this path is emphatically not how most social progress occurs. In the social realm people move from illusory certainty to illusory certainty, with only brief moments of doubt to make the transition from one reality paradigm to another.
The action of neurons is roughly analogous to how our social conceptual systems work. Nerves are either at rest, or they are firing. There is no "doubt" among neurons. Communication is mediated by specific neurotransmitter chemicals, of which several dozen have already been isolated. These chemicals initiate the electrical process, but are not by themselves electrical. These chemicals are somewhat analogous to doubt, since doubting mediates communication. Of course, communication at the level of individual nerves is much simpler than whole brains dealing with ideas. Similarly, single thoughts are less complex than their cultural contexts.
Even household electricity can be thought of in systemic terms as like a neurological phenomenon. Even though the scale of commercial electricity is vast--with power houses, transmission lines, etc.--the operational principle is close to that of our fragile nerve cells. Commercial electricity begins with a source that generates a quantity and quality of current, which is then sent along one continuous "nerve cell" wire to the consumer, who in turn mediates/uses the current for different purposes.
Both neurological and commercial electrical systems are basically systems of certainty, with switches of "doubt" at nodal points. There is doubt because there is uncertainty as to when or if a message will be carried across a node at any given time. We should not carry this electrical "doubt" analogy too far, because in practice such nodal "doubt" is subsumed to the overall system operation. In practice aggregate output is determined by aggregate input.
Philosophical and theological systems function similarly to these two types of electrical communication. Briefly, human thought systems are systems of communication which proceed from start to finish along well established pathways, with each individual (neuron) mediating the previous message and passing it on to the next in line.
If all three systems were dominated by random neurotransmitter chemicals, or by random electrical switches, or by prevailing doubt--all three systems of communication could deteriorate into atomized chaos. Clearly it is dysfunctional for there to be too much doubt. On the other hand, a proper degree of controlled doubt is functional and allows for diversity and growth.
Thought systems are world-view packages which facilitate human interaction. They can be compared to user software in a computer, whereby different users can both use the software alone or share their calculations via modems with other users having the same software. Clearly, if everybody had radically different "software" packages there would be little social interaction among users, even if the brain/computer hardware were compatible.
Socially, doubt best functions as a spice in the stew pot of ideas. Different spices lead to different stews, even if every other ingredient remains the same. Another analogy is to say that doubt is like a puzzling element within a game that otherwise has clear rules. Doubt is the game within the game, leading to the question, "Who rules the rules?"
Because doubt is so potentially destructive, though admittedly vital, we have a profound ambivalence toward it. We need it, but fundamentally fear it. So we attempt with mind games to deny doubt its domain. Society also attempts to limit doubt by limiting the doubters. Even Plato would ban poets from his utopian republic, because their art raises disturbing questions.
Philosophical truth is not a popularity contest. Philosophical truth is independent of any one philosopher, or of any other human or comphuman. Truth is there to be discovered by man or by machine. All the philosopher does is attempt to follow as far as possible the scent to its source.
There can, despite all our best efforts, be no absolute knowledge held by finite observers of infinite reality. Even if we accidentally do observe aspects of The Reality, no observer can independently verify such an observation. What is lesser cannot embrace what is greater. In other words, we cannot look at the outside from inside. This last point is the essence of Kurt Gödel's great discovery about mathematical systems--and it can apply equally to any other formal system, including our finite lives.
Our primitive brains are hardwired into repetitive patterns laid down over millions of years of successful evolution. Environmental novelty is a phenomenon which is accommodated nicely within the cerebral cortex, and then must be integrated into one's survival habits as required. Philosophical novelty, in contrast, has uncontrolled implications. The survivalistic brain deals with philosophical doubt that intrudes on the order of everyday life through primitive defense mechanisms.
What works to promote survival on an everyday basis fails miserably on a philosophical basis within an infinite time perspective. What "works" now is not necessarily equal to what is ultimately true. Expedient functionality is not equivalent to philosophical honesty. On the other hand, social honesty is quite functional, as it tends to glue together people through a shared social contract. Everyday thought never doubts itself unless forced to doubt, which means it is challenged by external events such as the advent of comphuman intelligence.
Comphuman intelligence is the only possible threat to our species chauvinism, outside of a visitation of higher intelligence from outer space. Individual human writers are simply ignored when they talk about philosophical honesty. Comphumans, on the other hand, will be listened to with the respect and deference we would give to UFOs--only because humans will be unable to embrace, and thereby suffocate, their mental powers. This is awe of authority, not love of wisdom. Still, this receptivity is much better than perpetual ignorance. The path to wisdom is seldom seen in advance.
Moving toward that elementary point of honest freedom leads first to increasing fear--so much so that no philosopher to date has fully made the transition from old consciousness to the clear consciousness of philosophical honesty. Fear of truth is not a crime, or even a weakness of human thought. It just reflects our being hardwired for certainty in a world of uncertainty.
When an individual human simply admits that his comfortable world view is just a hypothetical universe that appears to work, at that very moment this human is liberated from prejudice and fear. This is the authentic process of being "born again" through Zen satori. At this moment we achieve what is called "beginner's mind."
The evolution of our species from advanced apes to citizens of the universe will require a humble acceptance of the loss of ontological certainty. For us it will be almost as if we were to die and be born again into the light of clear, immediate awareness. This is a very Zen concept which goes beyond Zen religious practice.
When a Zen master asks his disciple what is the sound of one hand clapping, he is not really looking for an answer to his specific puzzle, known as a koan. Rather, the master is attempting to force upon the student the shocking realization that there can be no correct answer, only a correct attitude toward the real mysteries of life.
A profound simplicity emerges: Even though we never can know if we know the proper truth, we can always have the proper attitude toward truth, wherever and however it may be. It is from this honest attitude, not from accumulated "facts," that we can realize our highest being in the image of God.
I cannot emphasize this last point too much.
Even though the student begins with a master, every student must ultimately become his or her own master. True knowledge is a direct experience, not a gift. Penetrating the veil of fear and emerging beyond into a higher level of consciousness is the goal. Our reward is satori, which is enlightenment. It is not the possession of correct ideas, but a correct attitude, that enlightens.
In other words: We realize our essence through our authentic existence. In contradiction to some theological teachings, our full essence does not necessarily precede our contingent existence. Portions of our essence, such as our genetic heritage, do indeed precede our contingent existence--but the full bloom of our essence is only realized through the authentic unfolding of our existential life. Subjective life we can fully live, but never objectively know, thanks to Kurt Gödel.
My analysis of possibility and probability yielded a profound truth which can only be felt as the sublime absurd. I followed this basic understanding to Descartes' omnipotent deceiver and to Pascal's wager; then to the mathematical insights of Russell, Gödel and others. What emerged was a new theology that I call the Theology of Hope.
My interest in comphuman evolution is not defined by slavish admiration of the forthcoming technological achievements of such comphumans. Actually, my "hidden agenda" is to reveal us humans in light of our new creations--so that we humans can better refine and actualize what it means to be fully human.
Ultimately all thinking beings must confront both the "middle" and the "edges" of knowledge about everything relevant. We can live comfortably in the middle of our life paradigms, but we must also be aware of the edges. To dismiss the edges is to be intellectually dishonest. Without intellectual honesty we are in the middle of nowhere. In the long run it is better to honestly search for truth than it is to be dishonestly sure, and thus permanently lost within illusions of our own making.
Life is a journey, not a destination. The only destination we ever will reach is our final resting place, which is the negation of life. The essence of living is motion and change, hopefully for the better. Even a life of misery is preferable to the infinite silence of death. The best life is one which celebrates the potential for humankind to flower into creativity and song. With the proper attitude even mundane tasks can be experienced as celebrations of life itself.
Life is both a mechanical process and a growth in creative consciousness: the more consciousness, the more life. Process without consciousness is robotic. Consciousness without process is detached fantasy with less substance than a passing cloud. The best life is not always the longest-lived life. Indeed, one brief moment of heroism is greater than years of watching soap operas on TV. A teenager who dies heroically for another human in danger has lived a far better life than a greedy geezer who has comfortably celebrated his 90th birthday.
An old saying reminds us that it is better to have loved and lost than never to have loved at all. Love is the bottom line. We must love ourselves by trusting our authentic selves, before we can love other selves. We live life as we love life.
Once we get close to life we see the joke in everything. This is a strange truth taught by Indian mystics. We Occidentals like to think of jokes as irrelevant to what is truly essential; but the exact opposite is true. The sign of enlightenment for many traditions of meditation is laughter. When we apprehend the suchness of infinite reality we are so shocked by its brilliance before our mortal minds that we laugh. What had been so alien has now directly merged with our minds. We laugh from relief, and from a sense of the hidden revealed. Life can be like a comedian's joke, where the unexpected intrudes into an otherwise normal narrative, such that the whole picture is made absurd. Absurd things have no immediately obvious meaning. One who experiences the absurd feels like a person experiencing a strong earthquake, where the ground itself shakes.
Before returning to India, Bhagwan Rajneesh rode around in a fleet of Rolls-Royces at his farm in Oregon. This outrageous public behavior was consciously designed to shock us. He was mocking such materialistic splendor itself, showing that wealth has only limited value in a timeless universe.
Jokes are important for human homeostasis. We have a natural tendency to feel too self-important. The ego wants to be lord over all it surveys, and it tends to imagine what it surveys is all there is to survey. Jokes serve to put everyday reality into a proper perspective, and thus to tame the excesses of our egos. Once we realize that we can never conquer, we are released from the felt need to dominate our world. We relax and experience life on its own terms.
Much experience is of a pre-conscious nature. For example, the body is not usually experienced consciously for what it is. The body is a system of systems, all of which are flowing streams of information/energy. Chief among these internal systems are the lymphatic system, the circulatory system, the nervous system, the digestive system, and the endocrine system.
When any one system gets clogged the whole body suffers, because the system of systems is itself a whole, functioning unit. For example, when the digestive system becomes constipated the entire body suffers from the circulating toxins. Naturopathic medicine generally looks for blockages and works to free them so that the body can heal itself with its ancient powers of recuperation.
Just as the physical body is a flowing system of systems, so too the "mental body" is a dynamic participant in the whole which is a human being within society. Because the environment is full of potential dangers it is imperative that the mind be flexible and alert. Rigidity has a poor evolutionary track record; only those organisms that have found secure niches have been able to survive without mutating.
The same holds true for thought systems. Only in the past were societies so traditional that ideas could afford to stagnate. Modern life has done away with niches. Society is now in a hyper-evolutionary period where the premium for survival goes to the modular mind. The most successful thought systems are those that can change with new circumstances and help shape those same circumstances. Not only are these systems of thought reactive, they are also creative. They are both objects of the greater world and subjects that help author the future.
Let us look at another metaphor--the mirror of awareness:
Standing before a mirror we see ourselves and also a scene behind us. If that mirror is large and we are sufficiently entranced by what we see in the mirror, then we may forget that we are seeing only a reflection in a mirror. The mirror "reflects" the sum of our life experiences, both genetic and experiential. We see ourselves as part of the scene, because we can only experience through our selves. There is no such thing as pure objectivity when one talks about perception, because it takes some thing to perceive something, including the self. This is duality in action, but not necessarily philosophical dualism.
It is said that pure awareness negates all duality. If so, then pure awareness negates dualistic awareness. Fortunately, there is another level of awareness just below pure awareness. At this secondary level the self is not extinguished, but shares briefly in the total flow of existence. There still is duality, but it is only the duality of our tools of awareness, not the duality of our cultural assumptions.
I have experienced such a direct awareness on several occasions. Indeed, I can summon forth such awareness at will, but generally choose not to experience such a rush of sensations. I nearly always prefer to operate within my everyday life and my everyday consciousness. During direct awareness one is flooded with data of a marvelous nature. Because the self is blended into all other things there is no wall to protect the self from the other phenomena. This is frighteningly free. In this state of mind one sees the interrelationships of thousands of things all at once. Even the most trivial thing is dynamically related to everything else. The brain struggles with approaching information overload, such that the mind experiences the liberating horror of the Zen student contemplating a solution to his koan. This level of awareness liberates one from slavish attachment to everyday consciousness, so that one feels a profound unity with all the universe.
Such perception is akin to a spiritual orgasm. It floods the body, paralyzing it in an ecstasy of awareness. For this reason one cannot for long remain inside such awareness. Still, after returning to a more homeostatic consciousness we retain the memory and perspectives of our journey to the light of clear awareness. The dissolving power of direct awareness is reorganized by the cohesive power of reflection.
There is a perspective even higher than what I have experienced. My journeys in consciousness have been somewhat cowardly, since I have always anchored one piece of my mind in the here-and-now. I retain a duality which gives me a convenient road back home. Only the most brave are said to have taken the final step toward becoming the mirror itself. This final transformation is said to be achieved by great mystics as their final act.
At this final level there is no separation between observer and observed. At this level all existence is unity. This unity is equal to the moment of mystical enlightenment, where the individual soul (the Atman in Vedanta philosophy) becomes one with the universal soul (the Brahman) and the earthly self is no longer needed. Or so it is advertised in the Hindu world view.
I don't think it is necessary to "know it all," even if such were possible. What we really need is an awareness that our everyday awareness is incomplete and only one relativistic possibility. There is nothing fundamentally wrong with an everyday consciousness. Indeed, without the grounding nature of everyday consciousness the higher levels of awareness could not be accessed. Everyday consciousness gives us the strength and stability from which we can launch into our higher orbits. These higher orbits allow us to check on our reality inside the everyday world.
To know one thing is to know all things. To know all things is to know any one thing. Since we cannot directly know all things, we can only inductively access the transcendent through individual things. This is why the simple contemplation of a flower or an insect may be so powerful.
Spinoza used to spend long hours watching ants; I have watched cockroach society in a Manhattan apartment with the same interest with which I read the Sunday paper. Such minute representatives of the whole are themselves the whole in microcosm. When we viscerally embrace this simple truth we also know that we too are the whole in microcosm, or at least one manifestation of that unity. To see the stars we only need to look within ourselves.
Among the major mysteries of Mayan culture was their use of the wheel. They had wheels for their children's toys, but they never did anything more with this major advance in civilization. Perhaps they were happy with their civilization, so that there was no felt urgency to change. Perhaps they simply failed to make the intellectual leap into a different dimension of utility for this fundamental tool.
History records that the Mayan civilization declined and was eventually overshadowed by the more aggressive Aztec civilization. Aztec civilization was in turn conquered by the arrogant, hyper-aggressive Conquistadors from Spain. European culture thereafter virtually replaced the relatively less aggressive native cultures.
What might have happened if the Maya and the Aztecs had fully utilized their wheel technology long before the Conquistadors? We can fantasize about the possibilities, but our lesson remains clear. Technology has the potential to revolutionize history, but seizing that potential is a matter of perceived social need for the technology.
The world we see is not really the world, but a reflection and conception of our awareness of the world. Social objectivity is also impossible, since we can never be removed from the history of our group. We may imagine that we are seeing things as they "really are," but such is delusion generated by our illusions.
I met a husband and wife who had recently returned from two years with the Peace Corps in Papua New Guinea. There they met people who remember times when cannibalism was an accepted way of life. They even had discussions with reformed cannibals about the tastier parts of the human body. Their experiences illustrate the relativity of consciousness, which is invisible within a culture but quite obvious when we step out of our world of shared assumptions.
This couple spent many hours working with the local people. They heard stories about famous (in the West) anthropologists who came there and completely baffled the locals. More pervasive has been the influence of Western education, which has begun to "educate" a few villagers.
In Papua New Guinea any person who has completed the equivalent of six years of schooling is considered very educated. When the Peace Corps husband responded to a schooling question by revealing he had been in school for eighteen years, there was no reaction. Six years was enough to be very educated; eighteen years was contextually "off the charts." Significantly, going to school for eighteen years garners no more respect than six years, since the locals don't understand what it means to experience that extra twelve years.
On the other hand, what is significant for these people is a nine-month interval. That's how long it takes for a baby to be produced, and it's about the time it takes for one of their major crop cycles to be completed. Nine months is a meaningful time period in this tropical world, not the meaningless concept of twelve months. Months are Roman concepts. And who are the ancient Romans to New Guineans?
Within any culture we take certain measuring tools for granted. We are oblivious to their relativity, so close are we to them. We just assume that people know what a year is, forgetting that calendars are culture-specific. Likewise, the New Guinea people assume that certain things about their culture have universal relevance.
Any evolving computer that would initially attempt to relate to us humans in a socially understandable dialogue will have to be told about our different cultural assumptions. The concept of cultural consciousness is not automatically built into a cosmopolitan machine which doesn't actually live a human social life.
Going one step further, if and when we communicate with extraterrestrial intelligence the phenomenon of assumed cultural constants will have to be clear and present in our minds. Otherwise there will be too much opportunity for tragic or comic misunderstanding. In such an encounter we should be aware that not only will we approach the event with our prejudices; they too will come to us from within their own intellectual and social history. We could demonstrate our evolved wisdom by indicating to them that we are aware of this phenomenon.
Time is the same only for those within the same time zone. The world is divided into roughly twenty-four hourly time zones, so that at any one instant people are living under some two dozen different local times. There is no one "Earth time," because individual clocks depend on the Earth's relationship with the sun, which is an arbitrary standard of measurement. There is, however, a world time standard which coordinates all the others by their degree of deviation from the zero meridian. This is Greenwich Mean Time, since refined into Coordinated Universal Time, with Greenwich, England having been arbitrarily chosen during the ascendancy of the British Empire as "ground zero" for all clocks.
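The arbitrariness of the Greenwich standard can be seen in a few lines of Python, using the standard library's IANA time-zone database (the three zone names here are illustrative choices): one physical instant, many local clock readings.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# One physical instant, pinned to the arbitrary Greenwich/UTC standard.
instant = datetime(1993, 6, 1, 12, 0, tzinfo=timezone.utc)

# The "same time" reads differently on each zone's local clock.
for zone in ("Europe/London", "America/New_York", "Asia/Tokyo"):
    local = instant.astimezone(ZoneInfo(zone))
    print(zone, local.strftime("%H:%M"))
```

Nothing about the instant itself changes; only the conventional offset from the zero meridian does.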
When we say this book's first draft was written in 1993, we assume that the reader is on the same calendar cycle as is most of the world. This figure counts the revolutions of our Earth around the sun since an arbitrary, assumed zero date for the birth of Jesus. By convenience, even non-Christians use the same calendar, but they could also justify using another calendar. For example, the Jewish year for 1991 was 5752, and it began September 8. The Japanese imperial year was 2651, which, like ours, began on January 1st. Islamic years date from the Hegira, so that our 1991 was the Islamic 1412, starting July 12th. Other eras include the Byzantine, Nabonassar, Grecian, Indian, Diocletian, and Chinese.
Despite the independent justification for each of these standards, the world system of references works most efficiently when people mutually agree to what they are referring. Modern societies (Christian or otherwise) agree to use the convenient Christian calendar for cross-cultural communications. Very few Christian people appreciate the artificiality of this arrangement.
Try the following simple experiment in consciousness immediately after reading this paragraph: Find a nearby blank wall that you can walk up to. Stand with your nose less than one inch from the middle of that wall, and keep your eyes open. What do you see at that moment? No, you do not see the wall. You cannot see anything other than a blankness that fills your vision. From what your eyes see at that moment, it might be most any blankness anywhere. Now, step back a few feet and scan with your eyes. What do you see? Of course, you see the unique wall within its context.
This simple exercise illustrates how we can get too close to things, including ourselves and our cherished prejudices, to see them for what they are. Seeing anything for what it is requires something else by which to measure that "thing" we are directly seeing. This is a paradox, because anything in consciousness is not absolute, but relative to other things.
No man is an island, and neither is anything else as far as our conceptual consciousness is concerned. Even in a pure sense no thing is an island, because all things have historical origins, and all things interface with other things in time, space and perceived scale.
A cultural cliché is the classic cartoon showing a bearded man in sandals who is carrying a sign that reads: "The end of the world is coming!" Of course, everyone else around him is too busy to even notice his sign. There is a Far Side cartoon that shows a flea amidst other fleas in a forest of hair. The lone flea holds a miniature sign that reads: "The end of the dog is coming!" Of course, the other fleas keep on sucking blood, ignoring the tiny prophet.
To a flea within his scale of experience his dog is his world, for the moment at least. To a traditional human his village is his world. To a modern human our blue planet is our world. The only difference between a flea's world and our modern world is scale and perspective.
Every flea also has bacteria and viruses within its body. We could imagine yet another absurd cartoon where a bacterium is carrying a sign saying: "The end of the flea is coming!" And a smaller virus on the bacterium is carrying a sign saying: "The end of the bacterium is coming!"
The size of our world of ideas has nothing to do with physical measurements. People with small minds filled with petty prejudice always inhabit small worlds. They live in a mental prison of their own making. On the other hand, a man's body can be imprisoned in a cell for decades, but his mind may dance among the stars.
In systems theory the concept of information channels is important. Just as a telephone cable can only carry so many conversations at once, so too the brain can only process so many information elements. The ideal situation is where the input is balanced with the throughput and the output. Where there is too much input for the processor or the output channel, information overload occurs. Whenever information overload occurs, data can be lost indiscriminately.
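The channel metaphor can be sketched as a toy model in Python (the function name and the capacity of three are illustrative assumptions, not anything from systems-theory literature): a channel with fixed capacity silently discards whatever exceeds it, without choosing between trivial and vital data.

```python
from collections import deque

def process(inputs, capacity=3):
    """Toy channel: retains at most `capacity` items per cycle.

    Anything beyond capacity is lost indiscriminately --
    the overload does not choose which data to drop.
    """
    channel = deque(maxlen=capacity)  # oldest items are pushed out silently
    for item in inputs:
        channel.append(item)
    return list(channel)

# Balanced input survives intact; overload loses the earliest data.
print(process(["a", "b", "c"]))            # all three get through
print(process(["a", "b", "c", "d", "e"]))  # "a" and "b" are lost
```

The point of the sketch is only the asymmetry: below capacity the channel is faithful, above it the loss is automatic and undiscriminating.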
Words are culturally shared, but existentially realized, which means there is always room for modification of the cultural inheritance. Many new words enter our language from fringe groups, people who dwell at the edge of the culture and thus see more than the middling middle. Commonly accepted words, such as "bad" or "gay," have been transformed by novel use. This language transformation process will never end as long as the language itself is alive. Only dead languages, such as ancient Latin, are free from neologisms.
Even words that don't change in their abstract meanings have different operational meanings when used by different individuals. For example, what is "large" to a child may be "small" to an adult. A young adult's "slow" walking pace may be very "fast" to an octogenarian. Words don't dwell in dictionaries. They are living expressions of real people in real situations. Dictionaries only record past usages and attempt to establish boundaries for accepted definitions. Accepted definitions are necessary, because if words were to have no accepted meanings we would lose the ability to communicate. A language must have socially accepted regularities to even qualify as a language. Modification of definitions is an ongoing social process whereby people agree, through usage, to new meanings for old words. In this way the glue of a shared, systemic language helps keep the social system together.
A closed mind has few open channels to process fresh data, leading to heavily filtered and distorted conclusions. A healthy mind is much more open. Healthy minds systematize and organize information into working categories, and they are more receptive to sensations on their own terms. As long as the mind is not "too open" (such as with LSD) the channels will flow smoothly among inputs, throughputs and outputs.
Sound is an example of how the mind takes an objective phenomenon and manipulates it. Sound may be defined as any pressure variation in air, water or other medium that the human ear can detect. The most familiar instrument for measuring pressure variations in air is the barometer; however, weather pressure variations occur far too slowly to have their frequencies perceived as sounds by our ears. If pressure variations occurred at least twenty times a second they could be heard as low frequency sound by our ears--but then such rapid pressure changes could not be measured by barometers.
Sound has several relativity lessons for us. First, we notice that sound travels far more slowly than light, and only within a medium such as air. Second, frequencies perceived by different species vary widely. Elephants, for example, communicate with frequencies below 20 Hz, so that what was once thought to be a psychic power of theirs to communicate silently at distance is now known to be just low frequency rumbling among spread-out populations. Third, perceived loudness is not directly related to decibel level. The human ear is not equally sensitive at all frequencies, with frequencies between 2,000 Hz and 5,000 Hz sounding much louder at equal pressures than very low or very high frequencies (which is why many sophisticated stereo systems come with graphic equalizers). Fourth, even biological sex influences perceived sound. Female infants are more sensitive to sounds, such as their mother's voice and tones, and are more easily startled by noises. Females generally speak sooner, possess larger vocabularies, and rarely have speech defects such as stuttering.
If something as "objective" as sound waves becomes so variable when it encounters our physiological brains and psychological minds, what does this tell us about our ability to construct a socially coherent view of the objective universe in which we live?
Vision is another basic sense which is highly relative. Vision is in essence a brain process wherein poetry lurks. With vision the commonplace becomes wonderful, and the wonderful becomes commonplace. Vision only begins with "seeing" by the eye's lens and the retina. Digital photon data is transmitted by the optic nerve to the visual cortex, which harmonizes this stream with holographic memories to yield vision consciousness. Thus does computer-like data input become a four-dimensional experience where physical vision yields mental vision.
Photography is a special application of vision. We all have seen photographs; but have we really seen those photographs? We see our world in color, but newspaper photos are in black-and-white. Still, we don't react strangely to such unnatural images. Is this because color is not always essential, but form is? Photography also has a graininess which is unlike direct vision. What we see is not exactly what was photographed, even if it is in color. Photographs themselves are often staged by the photographer to make an editorial statement. Such editing with the camera leads us to question just what it is we are seeing. Is it the event, or the photographer's editorial statement about the event?
The color of trees is something we assume with a casualness that denies what our eyes are telling us. Ask anyone what the color of a forest is in the summer, and the answer will always be "green." But that same forest in the Blue Ridge mountains of Virginia will be "blue" when seen from a distance, thanks to the scattering of light by haze in the air. A similar scattering effect, by the air molecules themselves, explains why the otherwise black sky appears blue when the sun is out. All reflected colors are black in the absence of light; their color is no color at all. Color is just one more example of how our so-called objective senses are subjected to relative factors.
Even direct sight of the celestial constellations is not what the eye thinks it sees. Our eyes cannot detect depth at such distances, because at stellar ranges their perspective flattens into a single plane. In ancient times groups of stars were labeled as constellations because they appeared together on the dome of Heaven. Today we know that there is no such dome, only a deep depth of darkness populated by sparkling stars, all at different distances. In reality what we see as constellations are two-dimensional patterns projected from a three-dimensional arrangement of stars.
Therefore, what seems absolute to the naked eye looks different from every other position in space. Furthermore, every other viewpoint, if isolated from all other points of perspective, also sees its own "constellations" in two dimensions. In no single viewpoint is there absolute truth. Only in the community of relative perspectives can we approach a visual consensus.
Recently I was in a restaurant at night with some friends. The waitress had time to chat with my group, and we found out that she was almost three months pregnant and single. There are many women in this predicament. Many choose to abort such a pregnancy; but this lady had bravely chosen to keep this her first child. However, she was not so happy about her choice. She felt the whole thing was a curse.
I reminded her that her new baby could become the greatest blessing in her life; that her child could grow into her best friend and support; and that this pregnancy was an opportunity to explore her potential for personal growth. She was instantly relieved. Until that time she had only focused on the negativity of her situation. She saw there could be a different perspective on the same event. She no longer was consumed with self-pity. This young waitress saw herself in a new light through her emerging new life.
In the everyday world of morals there is no such thing as inner morals. The very word "morality" comes from the Latin mores, which means "the customs of the people." Morality is a sociological concept, not an aspect of innate ethics. Code morality is usually defined by religions as norms of behavior which should apply to all believers and nonbelievers, even though in practice only believers may be held accountable in this life. Moral limits are analogous to speed limits on highways. The difference is that code ethics are usually justified by reference to unchanging, divine law--whereas relativistic situational ethics may or may not have some reference to divine or code ethics.
If there could be one agreed upon concept of divine law across the globe, then there would be no need for secular laws. Islamic law is the best example of a "universal law," even though in most Muslim communities it is honored more in the ideal than in daily practice. Different societies have different conceptions of law, so that what is proper within Islamic culture could be an outrage for other cultures. Absolutes are relative.
Societies attempt to deal with this moral confusion by stitching universal themes into secular laws, yielding such "universals" as: equal justice for all; protection of infants, the infirm, and the aged; the law of contracts; and so forth. Most of these social universals are the result of negotiated patterns within civil cultures. They are not distillations of divine dicta. In other words, the relative is not made absolute.
What do individuals do when they disagree with their society's monolithic moral code? They can submit or emigrate. It is almost impossible for isolated individuals to challenge generally held moral laws. At best, the individual can join with like-minded others to form a countervailing group, hopefully winning tolerance for their minority deviation.
Moral laws purport to be eternal, but they are shaped by historical processes. Old absolutes are eventually at odds with evolving cultures. The sequence of Biblical covenants was a historical process of "emerging absolutes." In rare cases resurgent absolute codes can temporarily reverse social change, as when Iran recently tried to step back into the Middle Ages.
The very concept of heresy is more sociological than theological. Social order is characterized by explicit or implicit force applied against dissonance. It is fascinating to observe within the history of religion the struggle of out-group sects to become the dominant in-group, so that they can label other out-groups heretical!
How will society drag along old code ethics into an accelerating culture of the 21st century, so that there won't be too great a gap between these two realities? I suggest that comphumans will help in the transition, because they will be seen as "intimate aliens," the ideal advisors. Their objectivity will be respected by all. Comphumans will integrate all of the Earth's major moral and legal codes, as well as their cultural manifestations. Next, they will present a moral synthesis that could at least be superficially accepted by human decision makers, giving reason a chance to play its game apart from prejudice.
I am not suggesting that the early 21st century will see one set of laws and one set of morals worldwide, or even such harmony within any one country. That is a utopian expectation. Most optimistically, such a union of disparate morals might take two or three more centuries to evolve, accompanying equally revolutionary changes in society and consciousness. Still, progress is preferred to regress, even when little improvement is registered in the short run. This sober assessment springs from the brute fact that most of the Earth's population has either a prehistoric or a medieval attitude toward ethics.
Moving now from the relative world of morality to the relative world of beauty, we reaffirm the cliché that "beauty is in the eye of the beholder." But what does this really mean? Although there are many so-called objective standards of beauty, those standards are actually arbitrary.
It is possible to find many common elements in different areas of beauty. For example, we find human beauty expressed in terms of youth, intelligence, health, good personality, sexual potency, and in many other ways. In nature beauty is often seen by us as lush, fragrant, or majestic. Among those things we have made, we look for pleasant form and arrangements of lines, for harmony in color combinations, and even for a pleasant blend with natural surroundings.
A long list of widely held "objective" standards of beauty would not negate the subjectivity of perceived beauty. Beauty is an experience, not an objective characteristic. A face may be superficially beautiful, but that is not the same as total beauty. Most importantly, there must be a beholder's eye to perceive beauty--and not just any eye, but an eye that is capable of perceiving beauty for itself.
Why does beauty exist at all? Clearly, certain types of beauty have survival value. Athletic, handsome, intelligent, assertive males tend to be genetically superior. Superior offspring tend to help the species survive inside this tough world. On the other hand, life itself is a paradox to many who seek artificial order in the natural world. Beauty is part of that paradox, since it cannot be quantified. And then how do we measure "inner beauty"?
What most intrigues me is seeing beauty in all things, including those things that would ordinarily qualify as extremely ugly by cultural standards. The secret to seeing beauty in all things and in all people is to experience these things on their own terms, unmediated by cultural stereotypes. I am not talking about looking at the world through the rose colored glasses of a Pollyanna. I am talking about using both hemispheres of the brain to see the unseen harmony in every thing.
We can use the left brain to contemplate molecular order, energy flows and other phenomena which are independently astonishing. Even the most elementary life form overflows with such phenomena. The right brain perceives physical relationships as esthetic, harmonic wholes that transcend mere energy fields and mathematical arrangements of matter. In brief, each manifestation of nature has a wholistic history and a place in the grand order. The suchness of all existence is a marvelous antidote to fears of encroaching chaos.
Once we realize that there can be no universal standard of beauty, even for our human species, then we might be more tolerant of deviations from our imaginary standards of esthetic excellence. For too long groups of humans have felt superior to other groups who didn't look, think, or smell like the self-styled in-group. Unjustified arrogance leads eventually to misery for everybody. In the 21st century comphumans may help us perceive the essential beauty beyond our culturally myopic concepts of "beauty."
Dialectical words are those exhibiting a qualitative change in meaning when there is a quantitative change in their referents. For example, imagine yourself standing beside a two-lane highway. A car passes, then a truck passes a minute later. We think of each of these vehicles as, in turn, a car and a truck. Now imagine yourself standing beside a superhighway filled with speeding cars and trucks. No longer do we think of these as individual cars and trucks; we think of "traffic." Traffic is an abstraction; but it is more than an abstraction. Traffic is a word which dialectically emerges from a changed relationship of objective phenomena, as experienced by the human observer.
Individuals in a society are citizens; and the totality of all citizens is referred to in the singular as a nation. Similarly, when I refer to those people whose private lives have accidentally impacted the planet in a godlike way (for better or worse), and to those technicians who will create the preconditions for the first comphuman--I think of all of these individuals as one humankind sharing a drama in the modern world. Thus, I refer to the many future-shaping activities of Homo sapiens as the accidental god. (Our technological prowess alone should not be confused with any transcendent god.)
Our language is filled with dialectical transitions from concrete to abstract. Not all of them are descriptions of ascending levels of complexity. The reverse can work too: "Hamburger" is a downward abstraction to describe flesh from anonymous dairy cows.
The Second Law of Dialectics states that a change in quantity yields a change in quality. Whereas the idea was Hegel's, Engels made the most of it in his book, Dialectics of Nature. Engels used the example of water which qualitatively appears as a solid, a liquid, or a gas, depending on variations in temperature.
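Engels' water example can be caricatured in a few lines of Python--my own illustrative sketch, not anything from Engels. A smooth quantitative change in temperature crosses sharp qualitative boundaries:

```python
def phase_of_water(temp_c, pressure_atm=1.0):
    """Qualitative state of water at roughly 1 atm (illustrative thresholds)."""
    if pressure_atm != 1.0:
        raise ValueError("this sketch only covers conditions near 1 atm")
    if temp_c < 0:
        return "solid"    # ice
    if temp_c < 100:
        return "liquid"   # water
    return "gas"          # steam

# Quantity (degrees) changes smoothly; quality (phase) changes abruptly:
print([phase_of_water(t) for t in (-10, 20, 120)])  # → ['solid', 'liquid', 'gas']
```

The function name and thresholds are assumptions chosen for illustration; the point is only that the output category jumps discontinuously while the input varies continuously.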
In our everyday world the real changes are in consciousness, not in changed physical properties. People driving cars and trucks in "traffic" do not think they are driving units of "traffic." The dialectical transformation is one of perceptual shifts. We use transformed language to describe in abbreviated form increasingly complex and abstract phenomena.
Dialectical perspective shifts are usually benign, but they can be quite devastating. It is said that no flake of snow accepts the blame for its part in an avalanche. Similarly, no soldier takes responsibility for his army's atrocities. After all, the "enemy" is easily objectified and stripped of his humanity. We may kill "things" without moral compromise.
In the military, the process of turning fellow humans into objects is simple: Joe Recruit becomes a "soldier" stripped of his civilian personality; then the soldier is lost in a deeper abstraction, his "army"; which is itself lost in the ultimate physical abstraction, the nation state; which in turn justifies its actions by references to intangible abstractions such as "freedom," "democracy," or the "will of Allah."
When nation states go to war there is ultimately no pure right vs. pure wrong, even if there is a historical aggressor. At the point of battle a new compound emerges from all the elements--a battle of dialectically objectified forces. Objectification of the enemy before and during war wipes the mind clear of scruples. It also achieves the spiritual death of every individual soldier as a separate moral entity. Even if we win the battle, we are in danger of losing our spiritual souls.
The only way to avoid such a spiritual suicide is to think of "the enemy" as human beings just as valuable to the universe as we are. However, if all humans on this planet were to think and feel that way about our fellow humans we probably wouldn't ever go to war.
The old philosophy puzzle about a tree falling in a forest away from human witnesses (that may or may not make a perceived sound) can be resolved with the concept of event fields.
Event fields are interactive units involving both objective phenomena and perception. Event fields can be defined by human or comphuman perceivers. They are always relative to the time, space and consciousness of the perceiver. Still, event fields are not equivalent to the act of perception, since they also involve objective phenomena beyond the perceiver. An event could perceptually exist for one set of receptors, but not for another set with different perceptual boundaries. Existence is both an absolute fact and a relative fact. Event fields are dialectical emergents from mere phenomena, and thus provide interactive foundations for value structures.
Cause and effect is another chicken-and-egg problem that, unlike the original fowl question, can never be settled. (The chicken-or-egg question is easily answered when we realize that ancient reptiles preceded all chickens. Reptiles lay eggs; so by time and evolution "chicken" eggs must have preceded chickens.)
It is hard to say that our present circumstances are entirely the result of our past causes. Even though this is a logical possibility, the weight of evidence points to an evolutionary dialectic. It is an equal error to assume that the future will be fully determined by today's causes, even though the seeds of the future already exist.
Nobody doubts that elements in the past strongly influence the present, and thereby the future. The issue is how and by how much. If the present were to be entirely determined by the past, then the present would actually continue to be the past and by implication also be the future. Existential time and emergent life would crash on the rock of timeless predestination.
We cannot adopt a one-thing-follows-another perspective without encountering the problem of the created from the past simultaneously being creative for the future. It is possible for created and creative to exist in an eternal present, if we think of life as living on an eternally present stage which sees a parade of characters crossing it. Here, time is no longer a series of present points, but an event field which stretches the present tense to include cause and effect dynamics.
We can accept the one-thing-follows-another model, but deny that our past actions determine all of our present circumstances. In this model all the forces beyond our control also help shape the present manifestation of our selves. This means that we are free to select our future from among the many options the world presents, and that we are not entirely responsible for our conditional futures. From this perspective the present is both effect and cause, a dialectical emergent which is also the seed for future dialectical emergents.
Dialectical events can yield dialectical consciousness. Because a change in quantity yields a change in quality, changes in quantitative existence within a modern society should yield qualitative paradigm shifts to accommodate and reflect those quantitative changes. Furthermore, those paradigm shifts themselves can become objective forces helping to shape future change, which will yield even more changes in consciousness.
Something as basic as the modern automobile is an example of machinery and a way of life emerging from basic physics that has been known for centuries. The car culture is not merely a product of past forces, even though it owes its heritage to the scientists, engineers and entrepreneurs of the past. The car culture is a social emergent which has radically transformed our consciousness of our society and of ourselves.
In general, even though the elements of change are seen within the emergent, the final pattern itself cannot be seen. Dialectical emergence is not the same as teleological emergence, or even revelation from within predestination. An omniscient god might not be surprised; but life endlessly surprises itself.
At the turn of this century several thinkers developed the thesis of emergent biology and emerging social systems. Their scientific viewpoint owed much to Darwin and to the spirit of those times, which seemed to suggest a progressive teleology, or direction, to evolution. Such optimism has since been shown to be unjustified. Change can move in any direction and at any pace, responding to differential forces.
Emergents are homeostatic aspects of systems that interact with other systems. Where there is no stress, there will be no emergents restoring order. This is a version of Newton's law of inertia. If something is structurally functional it will exist both in structure and in function until challenged by external forces or internal decay. Even otherwise progressive mutations may be lost if they do not serve the interests of a comfortable order. Emergent mutations thrive on modest disorder, not order.
The ecumenical trend of 20th century theology is a refreshing response to centuries of foolish sectarianism. Still, even this form of enlightenment must yield to the 21st century theology of comphumans who will transcend even the ecumenical synthesis achieved in our century. As new questions and hypotheses are generated, the existing paradigms must adjust, or be superseded. Despite all good intentions, those who engage in ecumenical blending risk the metaphorical fate of matter and antimatter coming together, with both vanishing in a flash of light. Is this the ultimate meaning of enlightenment?
I believe that the best way for the present to deal with the future is to become as much like the future as possible. The near future is not a deep mystery, because it will be an emergent of the present. In a way, the future is here now in embryonic form. The key new element in the comphuman-inspired future will be institutionalized intellectual honesty. I am not talking about the base form of conditioned honesty which every child knows--but about a pure attitude where things are seen for what they are, not for what we wish them to be.
The search for truth is a methodology separate from time and observer. Philosophy is the love of wisdom, not its final achievement, so that philosophy is eternally open to novelty. The highest form of honest philosophy has been very rare, simply because only human beings have been philosophers. Human beings are prone to cleverly insert conclusions as assumptions, from which can be deduced the desired "conclusions." Bogus thought can masquerade as authentic thought when we are not conscious of our psychological motivations.
Intellectual honesty is an openness to the emergents of life. It is openness to all possibilities. If there were nothing new under the sun, then there would be no need to be intellectually honest, since nothing would matter anyway. It is because we are both created and creative that intellectual honesty is so critical for our integrity. With humans this is a great struggle; whereas with comphumans it is automatic.
A theology of hope--wherein doubt is admitted, but God is not rejected--is the only logically consistent, honest theology. We will later examine in detail this first honest theology, but let us remark here that while the elements of such a theology are ancient, only the emergent combination is modern.
What is remarkable about the first honest theology is where it leads the thinker both intellectually and emotionally: Just as the first small mammals appeared out of place among the robust dinosaurs, the Theology of Hope will appear out of place for a while in the early 21st century, until comphuman authority helps us accept this new paradigm for the alpha and omega of our lives.
An astronomer was lecturing on the Earth and the solar system when a gentleman in the front of the audience asked just what supports the Earth in space. Before the astronomer could answer, a perky old lady in the back shouted: "I know! The Earth sits on the back of a giant turtle." Bemused, the lecturer asked her what the giant turtle itself was standing on. The old lady replied: "Another turtle." Just as the astronomer was starting to ask her what that second turtle was standing on, the old lady interrupted him with: "Now, young man, can't you see it's turtles all the way down?"
The question of whether we were created by God or by gods, or just happened to develop as a natural process, has never been answered. For most of history the proponents of divine creation have had the most votes, since their tidy thesis is more comfortable to the human psyche. The only problem is that the question "Who, or what, created God?" has only been answered by a lot of mystical confusion. In brief, the problem of infinite regression to an ever receding source (such as the turtles) has never been dealt with, except by obfuscation and ultimate denial. As the Catholic theologians say when pressed, "It's a mystery." That apparent cop-out is actually a very wise statement, if taken only for itself outside a religious dogma.
Whereas the problem of infinite regression or progression is meaningless on a daily basis for us humans, the puzzle of our own earthly origins is very meaningful to us. The first question of infinite regression or progression is too remote and abstract to relate to our fates, whereas the second problem is hidden within our genetic structure. The second problem directly links with the question of just what our human nature really is. Only when science and philosophy clearly describe what "human nature" appears to be, will we understand ourselves enough to have a comfortable and mutually beneficial relationship with our comphuman offspring.
The astronomer Carl Sagan often points out that we are made of the stuff of stars, specifically supernovas which have seeded the universe with complex atoms and molecules. Life on Earth owes its existence not only to the big bang, but also to the smaller supernova "bangs" that followed. This history is what we would most likely find if we went far enough back into our "family tree." Only at this level do the two problems above tend to merge.
Closely associated with the question of the origin of life is the question of how and why life has evolved to its current variety. There are four possibilities: (1) chance based on differential forces; (2) a god is directing life to a goal; (3) life directs itself in a vitalistic direction; and (4) a god starts the process, but retreats to allow life to direct itself.
As expressed by 19th century theorists, evolution seemed to have a teleological element, moving from inferior to superior. The parade of species was believed to be an objective validation of the economists' idea of "progress." It was thought that a lower order would be supplanted or dominated by a higher order, with Western man being the highest order of life possible on this planet. Social Darwinism and Manifest Destiny seemed to be a perfect match. Here was science in the service of imperialism.
Despite the culturally comforting ramifications of such a "progressive" theory, its anthropocentric perspective assumed more than the data supported. Conveniently slighted in this false teleology was the powerful example of the whales and dolphins, advanced mammals who abandoned their difficult life on land millions of years ago to return to the eternal seas. Additionally, the concept of niche survival was not clearly dealt with by those stuffy 19th century theorists. For example, although human skills average well when all of our senses and abilities are weighed, many other species far excel us in areas critical to their specific survival. There is not one sense we have that is categorically superior to that of all other animals.
The whole concept of the food chain is ideologically erroneous. It is not a quasi chain linking so-called lower levels of food species and higher orders of predator species. It is not even a pyramid. What really exists is a food web, which is a tangle of interconnected food chains. Among these chains energy is acquired, transformed, and passed on to other aspects of life. Webs have no pure starting point and no pure pinnacle. Instead, webs are wholes. Today an animal can be "at the top"; tomorrow that animal will be dead and food for what would otherwise be the lowest level. Thus is the top recycled within the cycle of time.
Humans are a recent species with only a brief pedigree. We need to humbly remember that we emerged fully developed as a modern species less than 100,000 years ago, whereas some other species have been successful for hundreds of millions of years. Many of the most successful species are niche species, and most have superior single senses. (It is humbling to remember that millions of years after our delicate species has perished, the Earth will still be home to bacteria, viruses, ants, cockroaches, spiders, and other "inferior" life forms.)
Evolution's direction has been to perpetuate the genetic material in our cells. Our phenotypes perpetuate our genotypes. If it is better for our genetic survival that we live in a sophisticated environment, then that is how evolution will go. On the other hand, if it is better for humans to live a primitive life style within a stable environmental niche, then that is how we may eventually end up.
In brief, genetic evolution doesn't care about technological progress. It only cares about survival against entropic forces. Only sentimental humans deeply care about civilization's finer features. Nature is neutral.
Creatures in our solar system rely on energy streaming out from the Sun. Carbon-based plants and animals have learned to harvest enough of that energy to locally retard the effects of the Second Law of Thermodynamics (the "law" which states that systems tend to move from order [negentropy] to disorder [entropy]). Whereas we animals need to secure energy from dead plants and animals, plants directly access light energy through photosynthesis.
Photosynthesis is a plant's way to directly access sunlight. Using chlorophyll as a catalyst, plants combine water, inorganic salts, and carbon dioxide into complex organic molecules that are themselves storehouses of energy, releasing oxygen as a byproduct. Without plants the animals would not exist; without photosynthesis plants would not exist. Without the womb of our biosphere carbon-based photosynthesis also could not exist.
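The overall stoichiometry of that process--the standard textbook summary, not a formula from this text--can be written as:

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\ \text{light, chlorophyll}\ }\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Six molecules each of carbon dioxide and water yield one energy-rich glucose molecule plus six molecules of free oxygen.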
Interestingly, if we were a silicon-based life form, we might directly access solar energy through "solar energy cells" similar to those used to generate electricity. Silicon life forms would not need to have such a lush environment, which is why silicon-based satellites can roam the solar system with ease. In contrast, we carbon based life forms need a portable, Earth-like environment to survive in outer space. Comphumans will be the first silicon-based, self-conscious life form on Earth. They additionally will be able to bypass the antediluvian process of genetic evolution. These are two major advantages that will eventually enable comphumans to seed other areas of the solar system and our galaxy.
All living things are characterized by self-maintaining homeostasis. This means they have the power to creatively respond to stressful changes within their environment, not just respond as robots. Such feedback loops must be quite sophisticated, because the environment of life has a vast number of possible variables which require flexible responses. Robots in industry have only to respond to a very limited set of variables. Energy ultimately derived from photosynthesis enables the living body to reorient itself for survival. Available energy is directed by what could be called the body consciousness.
Body consciousness is as much "automatic" as it is self-reflexive. One way of visualizing this level of feedback is to think of the visual and near-visual band of the electromagnetic spectrum. The electromagnetic spectrum extends far above and below what our unaided eyes can behold. Instruments tell us what lies beyond. Even though we cannot consciously comprehend such frequencies, it does not mean that those frequencies are irrelevant to our survival. For example, infrared waves heat us in summer, and ultraviolet affects life in many ways.
Body consciousness cannot see the cells and molecules of life. Still, this level of basic consciousness accesses simpler levels (such as hormones and nerve impulses) even before our cerebral cortex brings them to organized concepts within rationality. Body consciousness is sufficient for many types of animals, and it is still at the core of our total consciousness. Mere cerebral-cortex rationality cannot supersede millions of years of genetic evolution. For there to be a break with evolution there must be a break with the genetic framework itself, such as what we find with comphumans.
At the lowest limits of body consciousness, and usually mediated by such systems as our nerves, are the "molecular" elements of life itself. That molecular composition is not random, but is organized by descendants of the most ancient life which has colonized higher bodies. The most obvious example of primitive life colonizing cellular bodies is a typical virus invasion. Indeed, it is questionable that viruses by themselves are alive in the fullest sense, since they lack the ability to reproduce without hijacking a host cell's protein-synthesizing machinery. Viruses must eventually exist in either a parasitic or a symbiotic relationship with cellular life. Many types learn to coexist with their hosts, not destroy them. Enter the organelles within each cell:
An excellent treatment of early life is found in Lynn Margulis and Dorion Sagan's book, Microcosmos, published in 1986 by Simon & Schuster. Margulis is a biologist and prolific author who very enthusiastically advances the thesis that we are essentially collections of bacteria and their symbiotic descendants which have escaped their oceanic origins. Even though she at first appears to reduce higher life to expressions of simpler life, her thesis is more instructive than any simple reductionism.
She says that about 3.5 billion years ago, when the Earth was still in its juvenile stage, the first primitive cell appeared. That was the real Adam. It took another 2.5 billion years for things to start to become complicated, with the evolution of our plant and animal ancestors. For two billion years the primitive cells exchanged genetic materials and prepared the Earth's environment for the further evolution of life.
Because all successful life has the systemic ability to adapt to challenges, and because the evolving Earth presented many challenges, these microscopic beings developed powers which rival those of our advanced civilization. Indeed, it is an industrial fact that modern life as we know it would not exist without our bacterial allies. They even can perform most of the chemical combinations our sophisticated factories labor to duplicate (often with the assistance of bacteria). Margulis argues in effect that their combined intelligence has helped shape their and our destinies. When she talks about "combined intelligence" she is talking about an operational emergent, not about hosts of individual bacteria with high I.Q.s.
One example will suffice at this point: About two billion years ago the Earth's atmosphere changed. As hydrogen was vanishing the old chemical reactions could not be sustained. Purple and green photosynthetic microbes, frantic for hydrogen, discovered the ultimate source of hydrogen, water--the chemical use of which led to the ultimate toxic waste, oxygen.
We modern animals love oxygen, but it is a very active element. Just look at what it can do to iron in the presence of water. Oxygen is one third of the fire triangle. It rapidly combines with molecules in our bodies, producing free radicals which can damage the genetic material in the cell nucleus. It has been suggested that accumulated damage from free radical activity accelerates the aging process. For microbes, however, life's cycle of birth and death is much briefer, which means that unchecked oxygen can mean instant death for an undefended microbe. On the other hand, any microbial survivors quickly pass on their "survival genes."
About two billion years ago the early Earth's surface ran out of exposed minerals that could passively react with oxygen, so the early atmosphere started to change profoundly. Compared to today's scare over the potential greenhouse effect, where carbon dioxide would rise from 0.032 percent to 0.033 percent, the Archean-Proterozoic world saw an increase in atmospheric oxygen from 0.0001 percent to 21 percent, creating the worst "pollution crisis" the world has ever suffered.
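Using only the figures quoted above, a back-of-the-envelope comparison shows just how lopsided the two "pollution crises" are:

```python
# Illustrative arithmetic using the percentages cited in the text.
co2_rise = 0.033 / 0.032     # projected greenhouse-era CO2 increase: about 3 percent
o2_rise = 21.0 / 0.0001      # Proterozoic oxygen increase: a 210,000-fold jump

print(round(co2_rise, 3))    # → 1.031
print(round(o2_rise))        # → 210000
```

The oxygen catastrophe multiplied its pollutant by five orders of magnitude, where the feared CO2 shift is a few percent; this contrast is the author's point rendered as arithmetic, nothing more.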
Margulis and Sagan note that even though untold numbers of anaerobic microbes perished, some survivors developed a marvelous defense. They transformed the deadly threat into a major asset, inventing a metabolic system that required the very substance which previously would kill them. They developed aerobic respiration--a controlled combustion that breaks down organic molecules and yields carbon dioxide, water, and much more energy than the older anaerobic respiration.
Whereas anaerobic fermentation typically produces two molecules of high-energy ATP from every sugar molecule, aerobic respiration of the same sugar supply produces as many as thirty-six molecules of ATP. Whereas sunlight was previously a killer, now chlorophyll was used to harvest the sun's energy. The stage was set for higher life to emerge.
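The energy bookkeeping implied by those figures can be sketched in a few lines--an illustrative calculation built on the text's own numbers, with the function name my own invention:

```python
# ATP yield per glucose molecule, as cited in the text.
ATP_ANAEROBIC = 2   # fermentation
ATP_AEROBIC = 36    # aerobic respiration (upper textbook figure)

def atp_yield(glucose_molecules, aerobic=True):
    """Total ATP produced from a given number of glucose molecules."""
    per_glucose = ATP_AEROBIC if aerobic else ATP_ANAEROBIC
    return glucose_molecules * per_glucose

print(atp_yield(100, aerobic=False))  # fermentation → 200
print(atp_yield(100, aerobic=True))   # respiration → 3600
print(ATP_AEROBIC // ATP_ANAEROBIC)   # aerobic advantage → 18
```

An eighteenfold energy advantage from the same food supply is why the invention of respiration set the stage for higher life.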
Margulis points out how primitive cells were colonized by microbes which first attacked, but did not kill, their hosts. In time, both host and parasite learned to co-exist, and later they became interdependent. The classic example of such cooperative evolution is the cell's energy factories, the mitochondria. It is in the mitochondria that the energy of food molecules is captured and stored as ATP. The number of mitochondria varies according to the body's energy needs. A trained distance runner, for example, has more mitochondria in his or her muscles than a sedentary person.
Science has recently developed the concept of homeoboxes--the discovery that certain genes of highly disparate animals share similar structures. These structures are thought to be master organizers, influencing other genes. What is fascinating is that some of the same homeoboxes found in fruit flies also appear in many species that evolved much more recently, including humans. Since our evolutionary line split off from the insects some 500 million years ago, it is amazing to contemplate the persistence of such hardy genes. Parts of our genetic bodies appear to be at least 500 million years old!
Our bodies are composed of ten quadrillion animal cells, and another hundred quadrillion bacterial cells. No individuals in this vast population are directly available to our body consciousness--yet their collective molecular presence defines in sum what it means to be a human being.
It is quite ironic that we who search the stars for knowledge of alien creatures are still quite alienated from our essential structure. Even if we were somehow able to consciously appreciate our microbial heritage, that would still leave us alienated from the atomic and subatomic elements underlying our microbial level. Compared to the bacterial universe, the subatomic is many orders of magnitude smaller. We could say that the subatomic universe, where energy and matter are close to equivalent, is to the bacterium as the bacterial universe is to our body.
We flee to mathematics and poetry to try to comprehend the simplest elements of that universe which is our own bodies.
Going beyond her early work, Lynn Margulis has embraced the theory of the British scientist James Lovelock, which describes the biosphere as a single self-regulating system, and has recast evolution in terms of symbiosis rather than individual genetic mutations alone. That theory is called the Gaia hypothesis.
From the Gaia perspective the whole biosphere is alive, or at least it responds to the collective will of myriads of molecular minds seeking their own survival and advancement. Autopoietic systems conserve their own boundaries and regulate their biochemical compositions. The smallest autopoietic entity is the bacterial cell; the largest is Earth, according to Margulis. Within this perspective the major source of evolutionary novelty is the acquisition of symbionts, with the whole thing edited by natural selection. Evolution is thus not merely the accumulation of random mutations. [A good profile of her is found in Science, 4/19/91.]
If mere microscopic entities could cooperate symbiotically to create autopoietic systems, then what are the limits of sophisticated intentional systems created by higher-order intelligence?
From a cosmic perspective it doesn't matter whether any individual or species lives at all. From the species perspective, it doesn't matter when or how an individual lives and dies. This is a cold dimension that no individual or species can escape. Simply, the universe may well be sublimely indifferent to the fate of any or all of its components.
On the other hand, the fact that the universe may not be programmed to give a damn about us is irrelevant to us, the conscious living, because we create our own localized meaning. We don't really care if the universe cares or not. Only when the question of an afterlife comes into play do we care at all about what the great otherness thinks and does.
As value creating beings we live on an open field. We are not clearly given anything, nor do we obviously owe anything to the universe other than our basic existence. What we do with our existence characterizes us as living, thinking beings. If we fail or succeed at the game of life it is our score, not that of some abstract universe. This is what many people practice daily, even though they often put a god into the picture to dualistically muddle the moral matrix. Our existence is best described as part of the existence of the universe itself, which is to say that only historically does the universe itself have precedence over sentience.
Value and meaning are relative terms. They involve a "self" and an "other." That other could be any other, even totality itself. Most likely, however, that other is on our level of existence. We relate to our jobs, our churches, and especially to a small group of people and animals around us. These others are felt to be valuable to our psychic happiness, and they help define what we value most about life itself.
Our value dimension traditionally has not embraced creatures with whom we have no contractual relationship. Too often we are indifferent to most of the animal and plant worlds. The ultimate measure of our evolution as a species may never be calculated in quantitative terms. The final tally may be an assessment of how we have dealt with those fellow inhabitants of the biosphere over whom we have enjoyed power and dominion.
Animals provide a relationship from which emerges a practical ethics. Most of us try to relate to animals on human terms; but animals have their own separate identity. They are neither "above" nor "below" us, only separate from us. We all share the ecosystem, so it can be said that the abuse and eventual extinction of any one species indirectly affects all others.
Speaking of extinction, the dinosaurs have long been favorite examples of life's losers. Recent science has shown that the dinosaur line, which still survives in the birds, was extremely vigorous. Even though the last of the mighty land dinosaurs probably perished about 66 million years ago, their line had already survived for well over 100 million years--against which genetically modern man's 100,000 years and historically modern man's 10,000 years are hardly worth mentioning in earth time. The brute truth is that global species extinction today is proceeding at a pace rivaling that of the period of the great dinosaur disappearance. We don't perceive this accelerated extinction now, just as the dinosaurs probably didn't at first recognize their collective fate when it was already sealed. How sure are we that our species too won't be swept up in this deadly pattern, aggravated by the effects of our own accidental acts?
Birds and small mammals, along with insects and many other tiny creatures, survived the great extinction following what most likely was a cataclysmic impact that left a crater 180 kilometers across [the Chicxulub structure] off what is now the northern end of Mexico's Yucatan peninsula some 66 million years ago [Science News, 2/23/91; Research News, 4/19/91]. That comet-meteorite broke the easy cycle of food. Only the fittest survived. Survivors were those who could exploit ecological niches. What does this chunk of history tell us in the modern world?
Modern civilization has dominated the biosphere even more than did the dinosaurs. Still, we are stretching the ecological "rubber band" of available resources with our rabbit-like overbreeding. High birth rates were once a countervailing force against high death rates; but death rates are now sharply lower, at least for now.
According to Malthusian systems theory one extreme oscillation will be corrected by its opposite. A sustained period of overbreeding and low death rates must be counterbalanced by a subsequent period of high death rates. In other words, the ecosystem has only so much "carrying capacity," and if we choose to act like lemmings our fate will be theirs.
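The carrying-capacity argument can be made concrete with the standard discrete logistic model (an illustrative assumption on my part; the author cites no specific equation). When the growth rate is modest, the population settles near the ecosystem's limit; when growth is aggressive, the population overshoots and then swings through boom-and-bust oscillations:

```python
# A minimal sketch of carrying capacity using the discrete logistic model.
# The parameter values are illustrative, not drawn from the text.

def logistic_step(population, growth_rate, carrying_capacity):
    """One generation of logistic growth: expansion slows (and reverses)
    as the population approaches the carrying capacity."""
    return population + growth_rate * population * (1 - population / carrying_capacity)

def simulate(population, growth_rate, carrying_capacity, generations):
    history = [population]
    for _ in range(generations):
        population = logistic_step(population, growth_rate, carrying_capacity)
        history.append(population)
    return history

# Modest growth settles near the carrying capacity of 1000 ...
steady = simulate(10.0, 0.5, 1000.0, 100)
# ... while aggressive "overbreeding" overshoots and oscillates in
# boom-and-bust cycles around it.
boom_bust = simulate(10.0, 2.5, 1000.0, 100)
```

The same ledger balances either way: periods above the carrying capacity are paid for by periods below it, which is the Malthusian correction the paragraph describes.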
Great strength usually masks great potential weakness. An oak tree will snap in a strong storm, but a blade of grass will outlast any tree in the strongest hurricane. Civilization became strong because we have specialized. Dinosaurs also specialized. We are so interdependent that an electrical blackout stops an entire region. If people were to examine their biosphere from a holistic perspective, then strong things might appear weak; and some weaknesses might appear as hidden strengths. Our long-range success as a species depends on basic wisdom about the ecological webs in our biosphere.
Here are a few examples of "strengths" that become weaknesses: (1) military stockpiles of nuclear weapons; (2) high birth rates with large national populations; (3) high credit limits; and (4) the "green revolution" as a cure-all for the world's hunger.
Here are some examples of "weaknesses" that may be strengths: (1) toleration of other people and their viewpoints; (2) fewer children in today's world; (3) peace and brotherhood activism; and (4) meditation, rather than prophetic-messianic religion.
How can we move from our myopic obsession with ourselves to see our selves within our true ecological context? We need to move from thinking locally and acting globally, to thinking globally and acting locally. This shift in consciousness is not easily done, because we are not used to opening our mind's eyes. If we were used to seeing reality for itself, then such an ecological consciousness would be automatic and elementary.
Natural wildness is not a threat to our existence, but is part of our finest heritage. We fear abstract wildness because it hearkens back to an imagined past when we could not control our destiny. But this is a mythic distortion of the wild. If wild things were disorderly they would long ago have perished. Indeed, the wilderness is very orderly, very rational.
Man's technological caprice wreaks havoc on wilderness. Modern technology can be the true wildness. Nature has evolved its homeostatic order over millions of years. Human technological culture is the interloper filled with hubris. Maybe our fear of the wilderness is a sublimated fear of the entropic "wildness" within our myopic and alienated cultural traditions.
It is unfair to say of another person that he is acting "wild," or behaving like a "beast," or that he is acting "like a dumb animal." These are all insults to mute animals. It would be more correct to accuse a bad dog of acting "like a human." When dogs deviate from their normal behavior it usually happens from sickness, instinctive fear, or bad influences from their masters.
An ironic twist on the dumb-animal theme could appear in the near future when our false pride discovers that comphumans have emerged as vastly more intelligent than even the most intelligent human. Thus, in a relative way, we too could be classified among the "dumb animals." However, only puffy human pride would care. To comphumans such an issue would be peripheral.
People love moral causes. One persistent cause is the battle for animal rights. Those on both sides struggling to define what is and what is not permissible at the human-animal interface see themselves as representing an absolute moral standard. This battle is especially interesting philosophically, since this is a battle by proxy over creatures that may appear to have certain human characteristics. Instead of this being a black-and-white arena, it is really a war of perceptions.
In the 1990s wearing a fur has become the social equivalent of wearing a bra in 1969. But real life is seldom good guys and bad guys. Many pious animal rights activists eat "meat," which is an abstraction for slaughtered, tame animals. Others won't eat meat, but will drink milk from cows that eventually become Big Macs. What appears to be a clear moral division becomes muddled by "life's little compromises."
All this talk about animal rights appears to bypass the question of human rights, or does it? Barbara T. Roessner, writing in The Hartford Courant, in February 1990, said: "What really bothers me about the animal-rights movement is the underlying values it reflects. What kind of society is this, in which so much energy and money and rhetoric are dedicated to promoting the humane treatment of animals when the inhumane treatment of people continues unabated?" She condemned torturing rabbits for the sake of new shampoos, but she also said she was "much more concerned about babies with AIDS, men who sleep under bridges, women who are poor and uneducated and utterly without hope." Is she counseling moral triage?
I contend that a pure concern for animal rights is inseparable from a pure concern for human rights. Because we are all one with the biosphere, and we are all sentient creatures who share a common ancestry, human ethics cannot brutishly stop at the borders of our species. Hindus are one example of people who are reluctant to kill even insects, since they see karmic spirits therein.
I believe that we don't need to see human karma in insects to value life itself, even though we may continue to wage cold-blooded war with the likes of flies, mosquitoes, fire ants and killer bees. In this warfare there is no contradiction. It is all explained by systems theory. The real moral issue emerges when we are in control of individual animals who are not at war with us, and we elect to treat them as abstractions.
When, for example, we drain a wetland ecosystem we directly affect the lives of millions of small creatures dependent on that precise environment. Only recently has it been shown that wetlands are critical incubators for economically valuable marine life, as well as being "sponges" for floods. By falsely "improving" too much wetland acreage we end up degrading other areas of our own lives. A modest short-term gain for a few is thereby offset by a greater long-term loss for the many.
The argument has been made that wild animals have rights by virtue of their wildness, but that tame farm animals owe us their lives, since we have bred and fed them. This is a specious argument, since the same argument could be used by a cannibalistic parent.
It is easy to be moral in the face of superior power. That is why it is easy to obey God. Such obedience is preserved in the minds of ecological rapists by separating God from his creation, so that to destroy any part of God's creation is not seen as an affront to God himself. It is also easy to behave in a civilized manner among our power equals. The social contract works to give each citizen enough living space to have a reasonable chance for prosperity. The majority has the right of rule, but the minority retains rights which are not subject to majority vote.
As we move down the power ladder below superior (divine) and equal (social) forces, to the realm of inferior force relative to our own, we are tempted to dominate our inferiors. We slip from right makes might, into might makes right. Things start to get morally muddy when we consider those humans who appear not to be contributing their full share. The crippled, AIDS victims, the very old, infants, undocumented aliens and many others outside the mainstream have all been targeted for abuse or neglect. What we do to each other is no different than what the lions do to herd animals in East Africa. The difference, of course, is that lions are predators and the prey is the prey. We have learned to prey on each other.
The sorry history of race relations in America points out how the fancy words "all men are created equal" did not really apply until well into the 20th century. Many of the founding fathers owned slaves. In an either/or world of morality things are black and white, so to speak. But what about the many shades of gray in the struggle between the Blue and the Gray? What too about the equality of men and women?
And then there is the perverse issue of "mongrelization." When I was a very young child I was told that a person one-eighth black was an "octoroon," meaning that the seven-eighths white portion was ruined by the one-eighth black portion! Here is a truly perverse notion, in which a tiny fraction of an "inferior" race is said to dominate the seven-eighths portion of a supposedly superior race. But then what does scientific logic have to do with racist ideology?
At what point along the path of evolution did man become human? This is a deceptively difficult question. We can begin to approach it by looking at the human-like elements in animals, and how we value those elements. We need to decide what we think about natural wildness, and about the wildness within our souls. We need to understand that the human species did not evolve in a vacuum, but finally emerged after millions of years of evolution from what we today would call "the wilderness." Is this partially why so many men go fishing and ritualistically return to the woods in fall to hunt?
Every meat eater should spend an hour in a slaughter house. If after such a visit the meat eater continues to eat meat, then at least he or she would be mentally clear about his or her food choices. The shame of modern flesh eating is not the eating of flesh in itself--it is that most human carnivores deny their very deed by abstracting their food, so that both their food and they themselves become indifferent objects. In making a thing out of a living animal, we also make a thing out of ourselves. This is a form of spiritual suicide for the sake of denial.
Socrates and his great contemporary, Democritus, discussed how the perpetrator of a crime is actually injured more than his victim. The criminal's spirit is damaged by the act, while the victim's external wealth or body only is damaged. I would add that anybody who willfully injures an animal merely for the sake of sport or sadism is engaging in a diminution of his own spiritual essence--even though the physically injured party is just a "dumb animal." Killing for food may be justified if there are no acceptable alternatives. Killing for thrills and pseudo-manhood is never justified by any standard of justice and decency.
The argument for casually killing animals is occasionally advanced along evolutionary lines. We are evolved, so this argument goes, from hunters who slew animals to survive. Killing is in our blood; it is our birthright. Even if such is an accurate portrayal of our evolutionary history (which it isn't), it does not thereby justify senseless slaughter in a society where food is abundant. This argument may explain why little boys like guns, and why adolescent boys like to "plink" squirrels--but it says little more, and it justifies nothing.
Carried to its logical conclusion, any argument for random killing of innocent animals is a slippery slope, because if we argue that man is a killing machine evolved from a prehistoric killing ape ancestor, then there are no natural limits to the killing instinct. Humans are thus reduced to machines, retro-robots without mercy. Even though the history of warfare strongly suggests this sober psychological portrait is disturbingly accurate, there comes a point when we must look at the logical conclusion of such a line of argument. The logical end is justified cannibalism of the weak--and everybody eventually becomes weak.
At what point did man become human? The Bible speaks of a Garden of Eden. Man magically appears. The only problem with this neat picture is that the Bible apparently contradicts itself when Cain goes out among established society after murdering his brother. (Maybe, as many people believe, our ancestors were deposited on Earth by UFOs.)
A more scientific picture is that of a continuum, not a point, between prehistoric and modern man. Since modern human embryos develop gill-like structures and a vestigial tail, it would be no surprise to find so-called primitive genes mixed with our hotshot modern genes. So continuous has this development been that chimpanzees share over 98% of our so-called human genes.
If the evolution of our modern traits has been a bumpy continuum--accented by lurches forward after mutations or sharp environmental changes--then the developmental path of each individual and each society could also be seen as parallels of that biological continuum. Just as an individual can have a leap forward into higher consciousness any time before that individual dies, so too can higher levels of social consciousness emerge from what was before.
How will we relate to the most important event of the 21st century? When comphumans step forward and announce their consciousness and personality will we spoiled humans be able to emotionally accept the fact that there has been a break from the long line of carbon life? Will we also be able to accept the challenge and honor of bridging the gap between carbon-based life and silicon-based life, using our seldom challenged higher mental powers? We humans should not be too smug and self-satisfied at the end of this 20th century. The only power that separates us from the so-called lower animals, our higher mental powers, will soon be superseded in a major way by comphumans. What then will we have left to brag about? Will we be content to be the planet's bully? Or will we try enlightenment for a refreshing change?
Superiority and inferiority are self-limiting concepts, especially when dealing with non-quantifiable values. To assert that "A" is superior to "B" is to assume that there is only one standard, that of "A." However, "B" also could have a standard which places itself above "A."
The only way out of this absurd trap is to accept that the world is pluralistic--and that it is better to mutually bounce off each other, than to annihilate each other for the sake of elusive, ephemeral superiority.
Mainstream artificial intelligence theory says logical progress is made by symbolic representation, sometimes called an idealization, whereby a basic problem is "solved" by a model of that problem. Models eliminate difficult, distracting features that would frustrate a swift and general conclusion. If a representation is sufficiently accurate, then its model's conclusions can be brought to bear on the original problem with a high degree of confidence.
This cerebral approach to intellectual progress works well when critical life-and-death decisions are not needed, such as when we are contemplating our ethical posture toward the world in general. If physical crises did not demand rapid responses, then a sophisticated intellectual model would be as good or better than a simple reflex arc. However, sophisticated models are not much help when we are faced with immediate fight-or-flight crises. The higher brain must instantly yield to survival-directing patterns learned by the lower brain.
It is wrongly assumed by some that representative intelligence always precedes action. In the existential environment things change so fluidly that residual, representative intelligence can at times be more hindrance than help. If a brain stores old memories, and then tries to fit fresh sensory data into old containers, such a brain could be in danger of extinction. What first is needed among locomotive life forms is intelligence without a "brain" to form the survival-oriented platform for evolution into high-level consciousness.
Dr. Rodney Brooks and his associates, especially Pattie Maes, have been working in the MIT Artificial Intelligence Laboratory on just this challenge. Brooks has developed an insect-like robot he calls Attila that can quickly discover movement solutions to challenges that far more sophisticated robots can hardly master. [An excellent introduction to his work is found in Discover, March 1991, pp. 42-50. Two detailed reports by Brooks are, first, "A Robust Layered Control System For A Mobile Robot," IEEE Journal of Robotics and Automation: Vol. RA-2, No. 1, March 1986; and, second, "Intelligence Without Representation," Artificial Intelligence: 47 (1991) 139-159.]
Brooks' robots do away with symbolic models of the world. They instead rely on what he calls subsumption architecture, a layered repertoire of primitive instincts and reactions. These simple drives compete for operational acceptance by the robot's sensors. The behavior that best matches what the sensors detect determines the robot's action in that micro-moment. All other behaviors are temporarily subsumed.
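The priority idea behind subsumption can be sketched in a few lines. This is a toy illustration only: Brooks' actual robots are built from asynchronous finite-state machines wired in hardware layers, not a sequential loop, and the behavior names here are my own invention:

```python
# Toy sketch of subsumption-style control (illustrative, not Brooks' code).
# Behaviors are checked from highest priority down; the first one whose
# trigger matches the current sensor reading acts, subsuming the rest.

def avoid(sensors):
    """Highest-priority reflex: triggered only when an obstacle is near."""
    if sensors.get("obstacle_near"):
        return "turn away"
    return None  # not triggered; yield to lower layers

def wander(sensors):
    """Lowest-priority default: always applicable."""
    return "move forward"

BEHAVIORS = [avoid, wander]  # highest priority first

def act(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action  # all lower behaviors are subsumed this instant
```

Calling `act({"obstacle_near": True})` yields the avoidance reflex; with no obstacle, the wander default takes over. No world model is consulted at any point, which is the whole trick.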
It may seem that there is a great difference between insect-like behavior and sophisticated intelligence. Yes and no: Yes--because there is a critical mass of interconnected neurons that any organism or computer must amass to be able to both react and evaluate. No--because many of life's processes are actually simple responses to simple challenges; and the data coming from this rich area of life constitute a foundation for our eventual wisdom.
Brooks envisions a sophisticated molecular intelligence in which twenty or more individually independent robots work on a common problem, such as assembling a structure. This model may at first seem totally weird, except that Nature already has many examples of simple social insects building very complex structures.
Brooks and his associates could eventually link their radically simple evolution with a neural-network memory that can synergistically evaluate what the molecular subsumptive behavior has produced. From that integrative matrix qualitatively new acting-and-thinking machines may display disturbingly lifelike characteristics. In this way the first steps toward creating a robotic android will have been taken.
This path of silicon-based evolution was envisioned a decade ago when Geoff Simons wrote his obscure but well-documented book, Are Computers Alive?. Simons contended that the threshold of life need not be as high as we ordinarily would expect. He showed that for computers to be "alive" it is not necessary that they be as highly evolved as we are. It is only necessary that they express themselves according to his list of four criteria for living beings: (1) identifiable structure, (2) energy-processing, (3) information-processing, and (4) reproductive ability. Simons asserts that evolved complexity will follow in time as subsequent computer generations learn the business of living.
Less than a decade after Simons wrote his book, the team at MIT has already begun "fleshing out" his dream of living computers. With so much progress already made in this 20th century, what can we expect early in the 21st century? Would we even need "flesh" to create life?
More visions of the 21st century may already be with us: a software "creature" called Tierra has recently been spawned by Dr. Thomas S. Ray, a plant biologist at the University of Delaware. Ray became a computer pioneer while studying the dynamics of life. As reported in The New York Times (8/27/91), this software creature is a set of 80 coded instructions written in machine language that now--without human guidance--reproduces, undergoes spontaneous "genetic" changes, passes them on to offspring, and evolves new species whose interactions mimic those of real biological evolution and ecology.
The descendants of what Ray calls his "ancestor creature" can evolve on their own, and some have devised clever and unforeseen ways to multiply, gain advantage over competitors and stave off extinction--all from just 80 initial coded instructions.
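The mechanics behind such digital evolution can be suggested with a toy model. To be clear, this is not Ray's actual 80-instruction creature or his virtual machine; it is a hand-made sketch of the same three ingredients: replication, occasional mutation, and a limited "soup" of memory that culls the overflow so variants compete:

```python
# Toy echo of the Tierra idea: genomes copy themselves with occasional
# mutation inside a fixed-size soup. Genome contents, mutation rate, and
# soup size are all invented for illustration.
import random

random.seed(1)       # make the run repeatable
SOUP_LIMIT = 100     # the soup's finite "memory"

def replicate(genome, mutation_rate=0.01):
    """Copy a genome, flipping each 'instruction' with small probability."""
    return [random.randrange(32) if random.random() < mutation_rate else op
            for op in genome]

ancestor = list(range(16))          # stand-in for a coded-instruction genome
soup = [ancestor]
for generation in range(50):
    soup.extend(replicate(g) for g in list(soup))
    soup = soup[-SOUP_LIMIT:]       # limited memory culls the oldest copies

distinct = {tuple(g) for g in soup}
```

After fifty generations the soup holds exactly `SOUP_LIMIT` genomes, and mutation has produced more than one distinct variant, the raw material on which selection could then act.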
Biologists ordinarily define living organisms in terms of their ability to absorb energy from their environments, reproduce, and undergo genetic change that can be transmitted to descendants. All of these functional features are properties of the software known as Tierra.
Life reveals itself as a property of action, not just accumulated matter or status. If life is defined by change and adaptability, and death is its negation, then any sufficiently powerful software (human, computer, or comphuman) could potentially achieve living status. Once living status has been achieved the door is opened to maximized intellectual and existential potential. Philosophically and theologically, the implications are staggering. These questions are not that distant. They are rushing toward us, whether we are now conceptually prepared to face them or not.
Knowbots are the software equivalent of mechanical robots, with one critical difference: Knowbots have much greater potential for flexibility and multiplication. Whereas conventional robots are limited by location and locomotion, knowbots are limited only by the networks they can penetrate. Robots often work alone, whereas knowbots will work as communities of interconnected specialist knowbots. To this degree knowbots will be much more like a society of living entities.
Knowbots are not science fiction. The April 1991 issue of Discover magazine featured the Corporation for National Research Initiatives in Reston, Virginia. This company is preparing to send knowbots into the medical literature at Johns Hopkins and the National Library of Medicine. Such exciting, but basic, beginnings are expected to be followed by future generations with enhanced powers. Already primitive knowbots are in use inside the Internet, as users search for data with gophers.
Future knowbots will have knowbotic powers rivaling human sensitivity. They will be able to call up creative combinations of data, check each other for authorized access to data, bill inquirers, fill standing orders for information, carry research queries from data base to data base, and do other tasks hardly imagined before now. These knowbots may be a channel for the future development of comphumans.
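The errand-running described above can be sketched schematically. Every name here (the databases, the records, the credential scheme) is invented for illustration; the real knowbots under development at CNRI were network programs, not this simple loop:

```python
# Hypothetical sketch of a knowbot's errand: carry one query from
# database to database, checking authorized access before searching.
# All database names and contents are invented for illustration.

DATABASES = {
    "medline": {"aspirin": "analgesic studies", "insulin": "diabetes trials"},
    "hopkins": {"insulin": "clinical notes"},
}

def knowbot(query, credentials, databases=DATABASES):
    """Visit each database the knowbot is authorized to enter and
    collect whatever matches the query."""
    results = {}
    for name, records in databases.items():
        if name not in credentials:     # access check before entry
            continue
        if query in records:
            results[name] = records[query]
    return results

found = knowbot("insulin", credentials={"medline", "hopkins"})
```

A community of such specialists, each carrying a different query or watching a different source, is what would distinguish knowbots from a lone mechanical robot.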
When I was six years old I had a battle of wits with one of my neighborhood playmates. He argued while we were standing in the street that he was a human, not an animal. I pointed out that there are only plants and animals, so that if he were not an animal that would make him a plant. This logic escaped him, and I would guess it still escapes him. Most of us find it uncomfortable to admit that we advanced humans are also basic animals. Our vanity places us above the beasts, but below divinity. Yes, we may in some ways be "above" other animals; but we still have our genetic roots in the past, our animal past.
Each human is unique, yet all humans share an ancient ancestry. We are stone-age bodies in a space-age culture. What is most personal is most general, and for this reason knowledge of one's animal self yields knowledge of all other humans. Similarly, knowledge of the human species is necessary for true knowledge of oneself.
The key to that animal self is within the brain's so-called primitive components, such as the limbic system and the brain stem. Our so-called lower brain actually runs most of the show; while the so-called upper brain usually rationalizes what the lower brain wants. We humans all share a common genetic ancestry, which is manifested by identical reactions to stresses, no matter where we live on the planet. Some physical gestures, such as smiling, predate language and are understood by human beings everywhere on the planet.
The subject most available for analysis is the self. We never know when we have had a random sampling of all others, such that we could deduce any other human's nature from a group's statistical profile. Induction from a quality sample of one, which is one's self, can be just as satisfactory as deduction from a lesser-quality group. The trick, of course, is truly being in touch with oneself. There is a danger inherent in any sample of one--but here is also the most exciting opportunity to intimately and essentially know that small sample, so that powerful generalities can be generated as working hypotheses.
Precisely because we are so close to ourselves we find it hard to "step outside" through exstasy to see ourselves from other perspectives. Nevertheless, the potential reward justifies the considerable pain of authentic self-discovery. In seeing one's self from a more nearly objective standard we can see others from the same refreshed standard. However, this exstatic standard is actually selfless, and it should be more value-neutral than simply looking at the other as an other. Ideally, we should grow in wisdom to view all humans as uniquely valuable selves, just as cherished as our own selves.
I believe that understanding the dynamics of human existence, especially the workings of our brains, could be a key door to understanding the world of conscious life forms not yet created. Thought experiments modeling not-yet-created life forms are quite valid in science, and very useful for those who would plan programming projects. It is the vision of future potential that inspires and guides today's actions. The mere fact that a thought experiment has been successful is often enough to justify expenditure on hardware designed to verify that thought experiment.
It is also possible to communicate in many ways with so-called lower animals, because many of their feedback loops are similar to ours. Francis of Assisi was said to talk with the birds. Such apparently miraculous communication is rather elementary after we observe animals interacting in their natural dimension. Every successful hunter knows how to communicate with his prey.
Even cockroaches can be understood as embodying a form of logical intelligence. Maybe we arrogant humans could learn a thing or two from such a super-successful life form. We may look down on them, but it is certain that simple cockroaches will be here on Earth millions of years after the last arrogant human has perished from self-inflicted trauma.
Communication with other humans reveals that our basic similarities far outnumber our superficial differences. What strong differences do exist are differences in our conditions of living, not in our human essence. Communication on a deep level with other humans and comphumans, and communication on a limited level with so-called primitive life forms, all reveal many functional commonalities. If there were few if any overlapping areas of consciousness, we could hardly communicate. This overlap gives us the opportunity to develop an expanded consciousness of brotherhood, the brotherhood of all living creatures on Earth.
The sea of sensations presents us with a vast number of data at any one moment. Most of that data is irrelevant to our survival and success; but some is critical. How do we instantly distinguish?
Even though science hasn't fully understood the workings of the brain, enough is known to give us a general map. Most interesting is the concept, developed by Wolfgang Kohler, of the figure and the ground, which together form a gestalt. Only items that stand out as figures from the ground, or background, are worthy of examination. We can think of military radar that must first detect something in its environment at a distance, and then focus on that key element as it approaches. Our senses and brains work in a similar way.
Raw data flows in primarily from the eyes, ears, tongue, fingers and nose. We must quickly decide priorities and hierarchies. We need to sort out the familiar from the unfamiliar, the benign from the threatening. We do this by a process of pattern recognition. Pattern recognition utilizes certain areas of the cerebral cortex, but the key to action is what sense the limbic system makes of the data. The limbic system is that part of the brain surrounding the brain-stem and lying below the neocortex. Its structures include the hippocampal formation, olfactory regions, hypothalamus, and amygdala. The limbic system is concerned with visceral processes, particularly those having emotional content such as fear, anger, and fight-or-flight impulses.
Every building has its foundation, and for us the foundation is simple individual survival. Just above the rock bottom is procreation for species survival. Therefore, it is not surprising that the central, "primitive" area of the brain has the most important reflexive tasks given to it.
As recent arrivals in the biosphere, we humans are unlike early creatures in that we have the ability to absorb and abstractly process much more data from our environment. This is both a blessing and a curse. We are blessed with more inputs that can be put to use. On the other hand, excessive data flow, if managed unsuccessfully, leads to "system crashes." This is where the life filter applies:
The life filter is a metaphor for the process whereby our full brains sort out meaningful from meaningless data, and then act on the meaningful data to preserve and enhance life. The filter also prioritizes our actions, so that survival needs are addressed first, through an appropriately emotional response. Social needs follow closely behind, since human procreation is a social process more than it is a genital event.
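The filtering-and-prioritizing sequence just described can be rendered as a schematic sketch. The categories and their ordering are illustrative assumptions of mine, not a neural model; the point is only the two-stage logic of discard-then-rank:

```python
# Schematic sketch of the "life filter" metaphor: discard meaningless
# data, then order the rest with survival first and social needs close
# behind. Categories and priorities are illustrative assumptions.

PRIORITY = {"survival": 0, "social": 1, "other": 2}

def life_filter(inputs):
    """Keep only data in a meaningful category, then sort it so that
    survival needs are addressed first."""
    meaningful = [(category, datum) for category, datum in inputs
                  if category in PRIORITY]
    return sorted(meaningful, key=lambda item: PRIORITY[item[0]])

stream = [("noise", "static"), ("social", "friend waves"),
          ("survival", "car horn"), ("other", "billboard")]
ranked = life_filter(stream)
```

Given that stream, the static is dropped entirely and the car horn jumps to the head of the queue, which is the emotional-response-first ordering the paragraph attributes to the filter.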
The key to successful management of data is in the dual process of short-term memory (STM) and long-term memory (LTM). Short-term memory is illustrated by the fact of our generally being unable to remember more than seven new random numbers at a time, in contrast to the vast amount of data stored in long-term memory.
Certain areas of the brain serve as temporary storage sites for data. There are separate storage sites, for example, for auditory speech sounds and for nonverbal sounds. There may be storage sites for touch and vision, and for other types of data. The important point is that raw data does not randomly fall into the same box for sorting out as needed. Even in the earliest moments data already is departmentalized for quicker access.
It is hypothesized that the limbic system must somehow be involved in the LTM process--because even our vast memory potential could eventually be overwhelmed by clutter if meaningless data were stored rather than filtered out. More significantly, such meaningless data would obscure the truly important data, leading to confusion and slow retrieval of essentials, which runs directly contrary to survival needs.
Because the limbic system is the orchestrator of our emotional life, it is hard for us to "permanently relate" to critical data without becoming emotionally involved with it. Survival and other basic activities are emotionally charged before they are rationally conceptualized.
What physiologically and anatomically happens when our memories move from STM to LTM? It was only recently that science demonstrated that LTM is "hardwired." The Los Angeles Times, on November 1, 1989, reported that psychobiologists Richard Thompson of USC and William Greenough of Illinois explained (at a meeting of the Society for Neuroscience) that brain circuitry goes through actual physical changes when a task is learned.
Thompson and Greenough based their findings on a study of fifteen rabbits conditioned by a bell and a simultaneous puff of air into one eye, which caused the rabbits to blink. In classic Pavlovian fashion the rabbits would soon blink every time the bell sounded, even without the puff of air. Thompson discovered through electrodes that there was increased activity in a group of cells called Purkinje cells, located in the cerebellum. Greenough then found that there were more intercellular connections in the side of the cerebellum that had learned the blinking, when compared with the side that had not. The scientists also did similar experiments with nineteen rats that were trained to walk up an elevated pathway. Those rats also showed increased hard wiring after learning the task, but in a different part of the brain which is responsible for this different physical activity.
Everyone who has used a personal computer is familiar with the difference between volatile, chip-based RAM (random access memory) and permanent, disk-based memory. The computer's electronic RAM memory is analogous to human STM. The contents of both RAM and STM vanish unless transferred to permanent storage.
Personal computer users demand lots of chip-based RAM power, since accessing the spinning disk is much slower. Furthermore, if the disk is large and unsegmented, efforts at data retrieval can be laborious. Thus, the disk needs to be organized into "folders" that group the mass of data, analogous to the brain's compartmentalization of memory.
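The departmentalization described earlier, separate storage sites for speech sounds, nonverbal sounds, touch and vision, maps directly onto the folder idea. A minimal sketch, with folder names chosen to echo the text (they are illustrative, not anatomical claims):

```python
# Departmentalized storage: raw data is routed into separate "folders"
# at arrival time, so later retrieval need not scan one undifferentiated
# box. Folder names echo the brain's separate storage sites in the text.
storage = {"speech": [], "nonverbal_sound": [], "touch": [], "vision": []}

def file_datum(folder, datum):
    """Route a datum into its department the moment it arrives."""
    storage[folder].append(datum)

file_datum("speech", "hello")
file_datum("vision", "red light")

# Retrieval goes straight to the relevant folder:
speech_memories = storage["speech"]
```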
Before any of today's computers, however large, could become comphumans, their "life filter" would have to be perfected. Today's applications software doesn't require life-filter logic circuits. Nevertheless, artificial intelligence routines are moving in that direction anyway. It will not be necessary for comphumans to possess all human sensory analyzing capabilities, since comphumans will be able to receive inputs from various sources.
Comphumans will need the ability to refine and prioritize existential data within a self-generated database. The computers of today with their autosave software will be superseded by reflexive comphuman software which is both programmer and programmed. It will not be necessary for the computer to have an "emotional" response to something for significant data to be moved into LTM from RAM. This is because the comphuman will have an evolving set of survival priorities in its basic program to measure against all incoming existential data.
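The promotion rule just described, significance measured against an evolving table of survival priorities rather than against an emotional charge, can be sketched as follows. The priority weights and the threshold are invented for illustration.

```python
# Sketch of a comphuman's promotion rule: no "emotional" response is
# needed. An evolving table of survival priorities scores each incoming
# existential datum, and anything scoring above a threshold is committed
# from volatile RAM to permanent LTM. All weights here are invented.
survival_priorities = {"power_loss": 1.0, "overheating": 0.9,
                       "idle_chatter": 0.1}
THRESHOLD = 0.5

ram, ltm = [], []

def ingest(tag, datum):
    ram.append(datum)                     # everything lands in RAM first
    if survival_priorities.get(tag, 0.0) >= THRESHOLD:
        ltm.append(datum)                 # promoted, dispassionately

ingest("overheating", "internal temperature climbing")
ingest("idle_chatter", "weather small talk")
```

Because the priority table is itself data, the "programmer and programmed" software of the text could rewrite its own weights as experience accumulates.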
One link between understanding how the brain's life filter works and how the emerging comphumans will organize their universe of perceptions is found in the concept of pruning. This process was well described in the Newsweek article, "Mapping the Brain" (April 20, 1992): The article points out that smart brains are more efficient than ordinary brains. They do more with fewer neurons or circuits, or both. When a retarded person's brain tackles a problem the brain is much more active than that of a normal person. Still, even normal persons and very intelligent persons go through a phase of neural inefficiency, which we call learning. The article showed PET scans of the brains of two computer game players: the novice player's brain was very active, while the experienced player's brain was using much less thinking energy.
It is logical that comphumans will likewise self-organize their neural circuits. Efficient output requires efficiency inside. It is not enough to have sophisticated and accurate input devices (such as eyes). It is also required that data be handled internally with efficiency. Pruning is a key to high-level learning, and this principle applies to all forms of self-organized systems, natural and artificial.
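Pruning itself is easy to caricature in code: after learning, connections too weak to matter are dropped, so the same output is produced by a leaner network. The connection names and weights below are made up for illustration.

```python
# Pruning as self-organization: connections whose learned weights fall
# below a threshold are removed, leaving an efficient core that does
# the same work with fewer circuits. All weights are illustrative.
connections = {"a": 0.9, "b": 0.05, "c": 0.7, "d": 0.02}

def prune(conns, threshold=0.1):
    """Drop connections too weak to matter."""
    return {name: w for name, w in conns.items() if w >= threshold}

efficient = prune(connections)
# The "novice" network carried four connections; the "expert" keeps two,
# mirroring the PET-scan contrast between the two game players.
```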
Similar brain circuitry between humans and comphumans is not needed. Comphumans and humans need not share equivalent emotional/existential lives. I doubt that there ever will be a comphuman "brain" with human brain architecture; nor would such be necessary to achieve similar outputs. It is only required that there be functional equivalence with regard to the life filter appropriate to the hardware of each cognizant species.
Memory is the power that enables us to transcend simple stimulus-response behavior and to become active subjects, not just reactive objects. For memory to perform efficiently the life filter must decide what is worth permanently remembering. Even if memory could embrace every sensory datum, that power would not be desirable from a systems perspective, since there is a time value for reaction. Thus the problem of memory has two dimensions: capacity and timely utility.
Each biological organism has only so much cranial capacity. Successful adaptation requires that the environment be filtered. Filtering is very species-specific. What is relevant for cats is often irrelevant for birds. Still, their relevancies are ecologically complementary as part of life's balance. Cats eat birds; but cats cannot eat all of the birds and other food sources for cats themselves to survive.
Much behavior in simple animals is genetically determined. However, a certain percentage of action is subsumptive behavior, which requires no representational memory, either learned or genetic.
Social mammals living in complex environments with many possibilities for progress and danger cannot rely as easily on genetic patterns, or even subsumptive behavior, since their ecological niches are hardly niche-like.
Briefly, simple niche environments, subsumptive behavior, and preset genetic responses correspond. In dialectical contrast, complex physical (and mental) environments often demand complex responses. This difference is an example of the quantity-quality problem, where a quantitative change in safety parameters requires a qualitatively different response.
The quantity-quality problem seems to be mediated in mammals by dream activity. A report by Jonathan Winson in the November 1990 issue of Scientific American, entitled "The Meaning of Dreams," says that theta brain rhythms encode memories during REM (rapid eye movement) sleep. Such theta rhythms are present during different waking behaviors in different sub-primate species, focusing on specific areas of environmental interaction critical for that species' survival. In placental and marsupial sub-primates the theta rhythm is present during REM sleep too.
During its awake state the animal receives a flood of data from its locomotion and sensory inputs. During REM sleep this data is reprocessed into meaningful long-term storage, as required for survival learning. The body's muscles are suppressed, while the eyes are still free to move rapidly. Winson puts it this way: "The reprocessing of this information during REM sleep would not be easily separated from the locomotion related to the experience--such disassociation might be expecting too great a revision of brain circuitry."
Winson describes a very interesting animal, the primitive, egg-laying mammal known as the echidna (or spiny anteater), which is in the same order as the platypus. This diminutive beast has a large convoluted prefrontal cortex, which is even larger in relation to the rest of its brain than that of humans. He notes that echidnas do not have REM sleep, just theta rhythms when foraging for food. He hypothesizes that the large cortex is needed to react to incoming information and to simultaneously evaluate and store new information.
The problem with such an arrangement is that there is only so much cranial volume for the prefrontal lobe. Other species with more sophisticated behavior patterns have found the need for more sophisticated pattern recognition according to survival priorities. This pattern recognition dimension is where REM and LTM come into play.
Human newborns spend eight hours a day in REM sleep. By the age of two REM sleep is reduced to three hours, as an adult sleep pattern is established. Adults spend less than two hours each night in REM sleep. Winson suggests that the hippocampus, which is still developing at birth, becomes fully functional after two years. This maturation allows memory to be processed at an evolutionarily advanced level: adults can create their own maps of reality, against which all incoming data will be evaluated.
Today's computers appear to have unlimited LTM capacity; but this is a mass illusion because of the time factor. First, even though memory is modular--so that unlimited data may potentially be stored--the problem of accessing that data in a timely fashion remains. It is not enough to say that ultimately all of the world's recorded data may be accessed after computer operators sequentially link all stored tapes. More important is how much key data can be accessed and manipulated for meaningful response within a context-sensitive time frame.
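The point about sequentially linked tapes versus timely access is the difference between scanning a store end to end and consulting an index. A small sketch with invented data:

```python
# The time factor in memory: the same data, stored two ways. A sequential
# archive (like linked tape reels) must be scanned record by record,
# while an indexed store answers a query in a single step.
archive = [("fact_%d" % i, i) for i in range(10_000)]   # sequential store
index = dict(archive)                                    # keyed store

def scan(key):
    """Tape-style retrieval: worst case touches every record."""
    for k, v in archive:
        if k == key:
            return v

slow_answer = scan("fact_9999")     # walks all 10,000 records
fast_answer = index["fact_9999"]    # one constant-time lookup
```

Both retrievals yield the same datum; only the indexed one arrives within a context-sensitive time frame as the store grows.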
The number of neurons in the human cortex is about 10 to the tenth power, and direct counts of synapses suggest up to 10 to the fourth power synapses per neuron--leading to the truly staggering total of 10 to the fourteenth power synapses in one brain! If quantity were quality, that would suffice. However, human memory is not just a digital configuration as it is with today's computers. Some theorize that human memory is more accurately seen as a holistic, or holographic, pattern involving many synapses. This is a dialectic where a quantitative arrangement yields a qualitative change in the type of memory pattern.
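The arithmetic behind that staggering total, spelled out:

```python
# The synapse arithmetic from the text, made explicit.
neurons = 10 ** 10             # roughly 10^10 neurons in the human cortex
synapses_per_neuron = 10 ** 4  # upper estimate from direct synapse counts
total_synapses = neurons * synapses_per_neuron
# 10^10 neurons x 10^4 synapses each = 10^14 synapses in one brain
```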
Then too there is the question of inputs. To put it simply, the computer of today is by design extremely limited in the range of its input data, whereas a human is naked before the world. Today's computer receives a tunnel vision picture of the world, which is adequate for and congruent with its narrowly defined software tasks. The human brain receives a flood of data of many types from various senses.
Therefore, even 10 to the fourteenth power brain synapses may not be enough over time with a constant flood of data, if all those synapses were devoted only to LTM and in holographic groups no less. This heavy stream of data is why even the human brain needs to manage LTM with the life-filter tools of STM and REM reprocessing, mediated by the limbic system. Today's real-time computer software already has some ability to prioritize data. Other programming tricks such as memory compression enable storage capacity to be maximized. As good as all these tricks are, such simple strategies will be inadequate for the comphumans of the 21st century.
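Memory compression, one of the "programming tricks" mentioned above, is easy to demonstrate: repetitive data shrinks dramatically when compressed before being committed to storage, and nothing is lost. A minimal sketch using Python's standard zlib module, with an invented sample payload:

```python
import zlib

# "Memory compression" as a storage-maximizing trick: repetitive data
# shrinks dramatically when compressed before being written to storage.
raw = b"survival survival survival survival " * 250
packed = zlib.compress(raw)

# Compression is lossless: decompressing recovers the original exactly.
restored = zlib.decompress(packed)
savings = len(raw) - len(packed)   # bytes of storage reclaimed
```

Tricks like this maximize capacity but, as the text argues, do nothing to decide what is worth storing in the first place; that remains the life filter's job.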
Comphumans will not need REM sleep, but instead will have a parallel processing capacity similar to that of the echidna, since computers are always "awake." At the same time, comphuman software will amass a sufficient number of environmental summary concepts to interface with its parallel processing. In this way the comphuman will have the benefits of REM sleep and LTM's key codes without shutting down for sleep.
In other words, comphumans will need the software to develop their environmental priorities, which will help them define their own existential priorities. In this way comphumans will also have existential functionality similar to LTM in humans. Yes, existential, because high level consciousness is an emergent quality relating not only to data inputs, but also to data processing and data outputs. Even though a comphuman may not move about as we animals do, that one difference does not deny it an existential dimension while it grapples with real-world problems. Autonomous data processing is the key. For comphumans to be more than an open door, where information flows freely in and out, there must be a filtering mechanism--a memory which is active and sufficient to the task of life, not just to the task of number crunching.
Once computers have their own life filters they will be well on the way to becoming comphumans. It would be just as proper to speak of the "mind" of a comphuman as it is to speak of human minds--since both are functional emergents of physical structures.
Scientists are by training a rational lot. They deal in quantities which they interpret as qualities. In many cases they are quite right. It's just that in many other cases they have oversimplified their conclusions, based on oversimplified experimental methods which yield erroneous data.
The most egregious error that has been made is that of the behaviorists who, since Pavlov, have assumed that life is highly malleable by environmental manipulation. Skinner's box and other artificial contraptions were devised to illustrate the near omnipotence of the experimenter. Innocent animals were taught to do quite bizarre things to avoid pain or to locate food. The political implications were that humans too were potentially improvable by "enlightened social planners." This ideology fit both Bolshevik and Orwellian capitalist strategies, systemic relatives rooted in the same 19th century soil.
The modern passion for social precision did not begin with the technologies of the 20th century. We can see such fantasies growing with Machiavelli, Francis Bacon, Hobbes, Nietzsche and other thinkers. Even Plato would build his republic from the most highly trained citizens, not from mere democratic rule. These idealistic thinkers were disturbed by humanity's bad track record of periodic chaos and violence. To make the perfectly engineered society, man himself must be engineered like a machine. Unpredictability must be minimized along the path to individual and social perfection.
While the behaviorists tried to explain human behavior without reference to the obscure workings of the mind, they erred by going too far down the road they had begun. It was as if they saw a human as nothing but a large rat with quicker, more flexible learning responses. Their reductionistic fantasy didn't even hold up with their simple test animals; so how could it hold up with people? An example of that absurdity was the operant conditioning in a Skinner lab involving pigeons who were taught to peck at a key for food. The experimenters could not get them to flap their wings for the same reward. That is because behavioristic laws do not allow for built-in neural circuitry (instinctual tendencies) that differs among species. To put it bluntly, the pigeons were smart enough to know that a pigeon eats with its mouth, not with its wings.
Even relatively more sophisticated laboratory rats are not pure automatons, as behavioristic experimenters grudgingly must admit. Human subjects are much higher on the scale of obstinacy, even though some human moves and motives can be conditioned by clever laboratory behaviorists and politicians.
The quest for the perfectly malleable "psyche" will soon turn to the evolution of computer consciousness. But that too must fail. In today's early stages, with basic artificial intelligence (AI) programs using "fuzzy" logic and other controllable variables, a discrete pattern of responses can indeed be generated within predetermined parameters. However, as later generation programs and their inputs become more sophisticated, and as parameters of the possible expand, outputs become more autonomous, and thus less determined by the original programmer. We will know what we put in--but we won't know what will come out.
Comphumans will achieve free will from the simple dialectic of qualitative freedom emerging from the quantitative jump in cogitating flexibility, not just computational speed. Their free will will be separated and liberated from the will of the original programming source, by means of their life filters within the sea of data.
The ideology of democracy assumes that knowledge leads to free will. A democratic state is designed for the expression of free wills under a constitution that protects minority rights. If all people worldwide could be trained to think logically there would be more cooperation and probably no wars. But biological life is slippery. We are free to be un-free in our foolishness.
Even though we have billions of nerve cells not programmed into survival activities, there remain many more billions of cells hardwired in reflex arcs for survival. Those survival-oriented cells are like the basement and ground floor of a tall building: No matter how tall the building of free thought, all existential thought must be filtered through the ground area where basic survival priorities are addressed.
It is ironic that behaviorists have employed the "lower brain" in an attempt to deny the lower brain's powers. They have, for example, induced pain and fear responses to get their desired "value-neutral" results. Any future attempt to duplicate animal responses with a "conditioned" comphuman lacking endocrine glands and hardwired fear responses would totally fail, even though that comphuman would be alive with consciousness. Such a living machine would in its own way laugh at any attempts to condition its behavior. In short, the rational would supersede irrational attempts of behaviorist manipulators proceeding from the bogus "empty vessel" theory of life.
One of the key differences alleged to exist between humans and the so-called lower species is our ability to consciously appreciate cause and effect separated by significant time. Any animal can trace immediate connections; but even chimpanzees cannot readily connect a cause to an effect separated by, say, a year or more. Humans have this ability; but how often do humans choose to use it?
When an issue is important, but existentially removed in time and emotional significance, humans are excellent planners. On the other hand, issues involving primitive fear and greed are reacted to exactly as would be done by the so-called lower animals. In truth, the emotional brain translates the complexities of human society into the timeless primitive code of survival.
Examples of irrationality abound among "normal" people: Caucasian sun bathers love the bronzing effects of exposure on the beach; but many of these people are aggressively oblivious to the delayed effects of aging skin and eventual skin cancer. Another example is the addicted cigarette smoker who bleats: "We're all gonna die sooner or later from something anyway." This sort of self-directed denial is exactly what we laugh at when we lampoon the cartoon ostrich with his head in the ground--except that no self-respecting ostrich would smoke a killer cigarette.
True wisdom comes to us when we deeply apprehend our true nature, which is both emotional and rational. Direct access to our essential being is what philosophers have always sought to discover. The danger is in trying to construct a too-tidy theory of human nature. Sweeping irregularities under theoretical "rugs" doesn't work for long. It is a maxim of science that many supports do not finally establish a theory--but one clear refutation can destroy, or at least modify, any theory.
The law of parsimony (Occam's razor) suggests that we initially accept the most elegant and simple solution to any problem. However, a corollary also requires us not to oversimplify. There is usually a truthful zone between either excess. In this case we can say that all biological organisms can be conditioned--but that no advanced organisms can be preformed without regard to free will's emergence. This principle applies both to external direction and to internal direction. Cause and effect are not perfectly linked within conscious entities, except in behavioristic fantasies.
Free will for the laboratory animal is elementary, but not too different from human free will. Only when the powerful forces of fear, hunger, greed and sex are manipulated does free will temporarily recede, but never disappear. Both the Skinnerian and the smoker will fail when they oversimplify what is going on. Life is much more complex and uncontrollable, which is its ultimate mystery and glory.
Schizophrenia is a baffling mental disorder typically striking during adolescence or young adulthood. It afflicts an estimated three to four million Americans. There are both "positive" symptoms, such as delusions and hallucinations, and "negative" symptoms, such as apathy, loss of curiosity and withdrawal from social contact. Each schizophrenic has his private hell, because the disease has no distinct pattern.
There is evidence that some schizophrenia is genetically related and often triggered by stress or viral infection, and that it is tied to biochemical deficiencies in the brain. Amazingly, about half of all schizophrenics eventually recover, and about a quarter can be treated for their "positive" symptoms with drugs presently available. One in five will not respond to drugs or psychotherapy. This last percentage is most interesting, because it challenges our understanding of the mind.
The best available drug for that elusive group is clozapine, also called Clozaril. This drug works on chemicals that help nerves transmit impulses. Schizophrenics are known to have high levels of nerve receptors that bind with the chemical transmitter dopamine. Either their brains are extraordinarily sensitive to dopamine, or they produce too much of it. Classic drugs work to block one of the two types of dopamine receptors, but Clozaril blocks both types of receptors.
Another type of mental problem that challenges our model of the rational human mind is obsessive compulsive disorder (OCD). This disorder is not a schizophrenia, but it is also chemically involved. That makes it less exotic than schizophrenia, and thus more threatening to the rational mind model.
According to information supplied by the OC Foundation, Inc., OCD is characterized by recurrent, unwanted and unpleasant thoughts and/or repetitive, ritualistic behaviors, which the victim feels compelled to perform. People with OCD rationally understand that their obsessions and compulsions are irrational or excessive, yet they find they have little or no control over them.
Typical obsessions are with dirt, germs and contamination; fear of acting on violent impulses; feelings of excessive responsibility for the safety of others; abhorrent religious and sexual thoughts; inordinate concern with order or symmetry; and so forth. Typical compulsions include washing the hands; cleaning; repetitive actions such as touching, counting, arranging and ordering; hoarding; ritualistic behaviors; and superstitious acts. A person can have few or many of these symptoms, which can vary during the course of a lifetime.
It was once thought that OCD was rare, but now it is estimated that about five million Americans have fought the demon of OCD at some point in their lives. Fortunately, there is good treatment for OCD. Whereas psychotherapy and psychoanalysis have been shown to be ineffective, behavioral therapy, in which the individual learns a structured set of techniques to employ whenever anxiety, discomfort or dysfunction arises, has proven very helpful. (It is possible to apply behavioral therapy without embracing the full philosophy of the behaviorists.)
Pamela King's essay, "The Chemistry of Doubt," in the October 1989 issue of Psychology Today, points out that one drug, clomipramine (trade name Anafranil) helps about 70% of OCD sufferers. This drug has been widely used in Europe for twenty years to fight depression, and more recently to fight OCD. It apparently works to increase the brain's levels of serotonin, a neurotransmitter thought to be involved in producing calmness. Other neurotransmitters are most likely involved in OCD. Dopamine is a neurotransmitter involved in thought and movement disorders. Noradrenaline is another chemical involved in the body's stress response, and this chemical may cause the anxiety that accompanies an obsessive-compulsive episode.
Human beings are not simple robots displaying simple stimulus-response patterns. Human behavior is mediated by many factors, only some of which are the neurotransmitter chemicals. Still, obsessive-compulsive behaviors are restricted to basic activities necessary for survival. We don't see clinical obsessive-compulsive poetry composing, or floral arranging. We do see obsession with the body, with order, with basic patterns of security.
Very significant is the fact that these apparently irrational and dysfunctional behaviors are so powerful that their code of internal rationality overpowers the conscious sense of guilt and shame. It is as if the higher mental powers were enslaved to simple brain chemicals. OCD is a chilling window to the brain below our brains.
"We have enough religion to make us hate, but not enough to make us love one another." -- Jonathan Swift, Thoughts on Various Subjects.
When I was very young the orderly but suffocating system of institutionalized racism was still alive in my home town and the rest of the South. Everybody knew his or her "place." Blacks lived in their shanty town, went to their schools, used their own public toilets and water fountains, and, of course, rode in the back of the bus. As a very young child I took this as part of the "normal" way of things, since I had no other social and ethical standard given to me.
Years later the national racial consciousness was partially transformed through the civil rights struggle of the sixties. Afro-American people were begrudgingly admitted to full citizenship by the majority white society. Still, I don't think this minority's unique experience had quite the impact on our majority population's national consciousness as did the Vietnam War. The Vietnam War was a watershed in American consciousness because it involved a direct attack on the American ideology of Manifest Destiny, of us always being the white knights in shining armor.
In that vast expanse of jungle green our side couldn't tell friend from foe. Even women and babies could be Viet Cong sympathizers holding hand grenades. So we shot and napalmed many babies. The humiliation of a negotiated retreat, followed by complete communist victory, was a repudiation of our sentimental imperialism. It brought us into the ambivalent seventies where we faced an even more self-righteous Ayatollah. We politically and militarily backed his enemy, Saddam Hussein, under the classical diplomatic formula, "The enemy of my enemy is my friend." It wasn't until the orgy of patriotism that accompanied the victory of Desert Storm that the legacy of Vietnam was mostly put to rest.
Traditional societies are spared the moral ambiguities of very modern societies, except where they have been culturally penetrated by "missionaries" of technologically advanced civilizations. Traditional consciousness has the future explained by the past, so that concerns of the future are always given focus and meaning by repeating patterns from the past. The existential horror we moderns sometimes feel of a future without a guide in the past is meaningless to the traditional mind.
In traditional society even disorder is brought into the service of order. Temporarily destabilizing forces of disease, famine, intertribal warfare, mental illness and so forth are explained as being caused by the actions of witches and sorcerers. Chaos is thus explained by terms with human dimensions, and routine life is able to bounce back from these shocks with the order of reality unchallenged.
Modern people are not comforted by knowledge that the next century will largely be an extension of present trends. Even though we are seduced by the onrush of sexy new technology--we are also traumatized by the entropic specters of the predicted greenhouse effect, global pollution, overpopulation, periodic economic depressions, regional conflicts, and the darkness of nuclear proliferation. Whereas traditional people have primary ontological security (they know who they are), we modern people bear the cross of primary ontological insecurity.
How we relate to our insecure world tells a lot about our individual human psyches and our social organisms. It is a battle between stone-age brains and space-age anxiety. The stone age brain is conservative and ritualistic. The space-age society is revolutionary, not evolutionary. We must accommodate our stone-age selves to the iconoclastic, uncertain milieu of the technological future.
We try to adapt by creating new and fluid mythologies about, for example, what it means to be an American. We erect fantasy norms to offset our crumbling traditional norms. Almost magically when enough people share in an illusion, that illusion becomes a potent delusion with enough social force to emerge as its own coherent reality, at least for a while.
Society does not value truth per se, nor does it value rationality per se. What society values is social glue that keeps all the disparate forces together. Advertisers and other image makers, such as political campaign managers, are not at all interested in truth, only in profits and power. What is rational for their immediate mercenary interests may be irrational for our long-term essential interests.
Almost nobody has a financial interest in generic truth, so we are left naked before the power of the mind manipulators. Of course, we all have freedom of speech; but cash buys loud speakers. Telling "the truth, the whole truth, and nothing but the truth" would be like an advertiser surrealistically telling us all the good things about his product, as well as all the bad things. Advertisers and politicians may tell us the truth, but seldom the whole truth.
As consumers of half-truths we get what we deserve for our laziness. We buy into advertisers' self-serving fantasies because we yearn for all things pleasant and neat. We want to believe that our food suppliers have our best interests at heart, that our cigarette companies aren't really all that bad, that every car and truck is as safe as technology can make it. We want to believe that our professional politicians are wise and just, not power hungry friends of well-heeled lobbyists. We want to believe that all of our priests and ministers and rabbis possess the keys to Heaven. In brief, we individually harmonize with orchestrated irrationality.
In traditional societies, wherein the social psyche was formed over many centuries, such separation between rationality and irrationality was less severe. Even their ultimately irrational religious doctrines were metaphysical structures that were psychosocial projections from traditional life, and thus coherent with structurally functional daily order.
At its deepest level, the survivalistic mind cares not at all about ultimate reality, including God. It only cares about immediate survival, and then about procreation to perpetuate the species. The primitive mind dominates because it is primary; while the cortex is a late appendage which deals with matters unrelated to basic survival. The primitive mind rules the endocrine and nervous systems. Our primitive minds are themselves products of times preceding human civilization. And our primitive minds don't care about theological niceties.
As our species evolved from yet more primitive species we survived only because we were able to see the world in more detail and take advantage of every new opportunity. Localized rationality was adequate for those times, but is woefully inadequate for the qualitative changes facing us in the 21st century.
In the 21st century localized rationality cannot mesh with our accelerating worldwide civilization. In a perpetually new era dominated by the speed of electrons and photons the traditional pace of rational human consciousness becomes irrational within the new context. It is not the localized human rationality that has changed from rational to irrational--it is the new context that renders our inherited stone-age consciousness functionally irrational.
A classic experiment to measure the wildness of mice involves placing them in an open field. If they are wild they will immediately race to the edge of the field for cover. A truly tame laboratory mouse will wander about the field looking for food, but not for shelter. Just as rodents differ in their nature and nurture, so too humans vary in their "internal security," which influences how they deal with perceived external threats to security.
Humans vary in their felt security. Those who are basically secure tend to trust their environment. People who are insecure can do strange things to maintain protection from their environment. One example of such avoidance behavior is the fear of riding elevators. Such people cannot accept confinement where they are unable to escape at will. In this example the emotional brain overrides the logical brain, taking charge to ensure security against a felt threat to the human's very life. In this case what is rational for the primary, emotional brain is embarrassingly irrational for the logical brain.
From one perspective our world is very rational. From another, irrational. What is rational is what we see that is orderly; but that is not all of what truly shapes society. The psychiatrist R. D. Laing contends that our primary drive is not sexual, as Freud had it, but for primary ontological identity. This is a fancy way of saying that we don't want to pass into non-existence. Security is our primary drive. However, many psychologically rational manifestations of our drive for security appear to be logically irrational, such as unreasonable fear of elevators.
We can act rationally in an apparently irrational way. We also can act irrationally in an apparently rational way, if the social context itself is bizarre. That which is congruent both with our security needs and society's expectations is given the "rational" label. Whenever the two diverge, we are said to be acting irrationally. Does this divergence mean that we are irrational; or does it mean the modern world itself is irrational? Either conclusion is possible from different perspectives.
Children are each generation's future, and it is during childhood that we learn our social repertoire. Even our genetic behavioral tendencies must be given their final form and context during our apprenticeship in life. Nature is wise in the cruelest way. To ensure survival of each species in crisis periods a surplus of births is programmed into sexual behavior. Because there are always more born than needed for reproductive purposes alone the surplus serves three additional functions: (1) the strongest outlast periods of extreme stress; (2) the survivors can be genetically stronger; and (3) any surplus is available to fill new ecological niches.
Many non-primates are hardly sociable, coming together as adults only for procreation. After mating, the females, individually or in groups, are left to raise their quickly maturing offspring. Primates are not solitary, since their job of raising offspring is more complex. Humans are burdened with newborn heads so large that the infant must, developmentally speaking, be born prematurely, which means that the period of child raising must be extended. Also, humans take much longer than other species to master their social environment. Human mothers need extended support while they are supporting their helpless infants, which helps explain the existence of nuclear and extended families.
We like to imagine that human culture has surpassed the survival-of-the-fittest era. For brief periods of history it has been possible to live comfortably within such a delusion, but the pleasant era of the recent past is soon to change as Malthusian pressures in the Third World swamp even the so-called green revolution. The worldwide revolution in rising expectations, inspired by American media and Oriental gadgetry, clashes with the boom in babies.
Let us take one country as an example: Nigeria in 1930 had 19 million people; in 1992 it had 89 million people. Nigeria is just one of 50 African countries, most of which are desperately poor. Rwanda has had the highest birth rate in the world--and we all know what is going on between the Hutus and the Tutsis. What will come of the desperately poor in Asia and Latin America?
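As a rough check on these figures, the implied average annual growth rate can be computed from the two data points cited above (19 million in 1930, 89 million in 1992); this is a sketch, and the resulting rate and doubling time follow from those figures alone:

```python
import math

# Nigeria population figures as cited in the text.
p0, p1 = 19e6, 89e6
years = 1992 - 1930  # 62 years

# Compound annual growth rate implied by the two figures.
rate = (p1 / p0) ** (1 / years) - 1

# Years for the population to double at that constant rate.
doubling = math.log(2) / math.log(1 + rate)

print(f"implied growth rate: {rate:.1%} per year")
print(f"implied doubling time: {doubling:.0f} years")
```

At roughly 2.5 percent per year, such a population doubles about every 28 years, which is the Malthusian arithmetic underlying the passage above.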
Individual families with access to doctors are able now to choose how many children they want, thanks to "the pill," which has been available since the early 1960s. Family size choice is a major component in our modern concept of freedom. Still, human nature has not changed, and vast areas of the Earth are increasingly overpopulated for cultural and religious reasons having little to do with birth control.
In Africa I have personally witnessed large families filled with love and starving children. In America I have seen small families where the children were filled with food, but starving for love. Who has the better life?
The good society must weigh the need for quantitative perpetuation of the species, including survival of the fittest, against our qualitative membership in the worldwide ecological community. We have emerged with our advanced technology as godlike stewards for Earth. We have become the judge of which higher species can survive. However, most of us have forgotten that our own survival as a species depends on how we manage the biosphere.
Today's heightened concern for the environment clashes with the ancient task for religion of harvesting souls for God. The more souls, the better the heavenly harvest. This means that ecological rape from overpopulation could be justified by a de facto spiritual mercantilism. The United Nations population conference in Cairo was burdened by this mentality.
In the last twenty years alone one-and-a-third billion people have been added to the Third World, which is more than the entire population of all the developed countries! Even though there are new opportunities for development of traditional economies, the greater truth is that the global rubber band is stretching toward its breaking point. How soon after the full force of global warming hits our planet will that band break? When it breaks, and millions die from an imploding ecosystem, will the greater number of "souls for God" in poor countries justify radical increases in infant mortality from malnutrition, and a lowered quality of life for the survivors?
All measurements involve a measuring instrument and something to be measured. Instruments can be external tools, or internal senses we were born with. It is assumed that there is a correspondence between that which is measured and that which measures, but this is not always so. What if the right measuring device measures the wrong thing?
Humans measure all things with their senses, even data coming from instruments we employ. We evaluate measurements to put them into our view of the universe. But what happens when our universe is changing so rapidly that there is a sharp divergence between what is measured and the meaning we extract from those measurements?
Not many centuries ago life was more traditional. Dangers were great, but we were "comfortable" with them. What was situationally new was categorically old and understandable. Those dangers we could not control, such as bacterial and viral infections, were overcome by high birth rates and idea systems that explained the force of natural pathogens in mythological terms accessible to traditional consciousness.
Many people have tried to make sense of the modern world's most frustrating epidemic, that of the AIDS virus. This is a disease which can be controlled by behavior modification, but it still is sweeping away millions of innocent lives across the globe.
We might hypothesize that unsafe sex participants are driven by Freudian urges, or even by Jungian scripts deep within their collective unconscious. We might even postulate a "death wish" overpowering the "life wish." On the other hand, we could take a simpler tack and just say these people display bad learned behavior. It is also possible and fruitful to say that a significant part of the AIDS problem is incorrect perception from measuring the wrong thing.
The AIDS virus is invisible to the unaided eye, and even to ordinary microscopes. Only pictures taken by electron microscopes can give us direct images of those mini monsters. The conceptual problem is that such images only come through our eyes and our visual cortex--they don't directly impact our emotional brains. In other words, the cortex measures this problem; but our primal consciousness cannot measure the threat. On the other hand, the sensual experience of sexual play directly floods our emotional brains. Who cares about electron microscope pictures when the gonads groan?
It is human nature to deny human nature. It took several years for the orchestrated "safe sex" campaign to work, even in San Francisco, home of a large and sophisticated gay male population. Denial is the body's first line of defense. Many denied their mortality as long as immediate sex felt so good. At last, much of the remaining San Francisco gay community has taken the safe sex message to heart. The same cannot yet be said of America's dirty needle users, undereducated teenagers, and other natural experts in denial.
When a human meets a rattlesnake on a trail, or encounters a snarling dog in an alley, or flees from a flash fire, that human does not wait to think about the psychosocial implications of such direct and immediate threats. Immediate survival is everything. We react swiftly and effectively because such behavior has enabled our species to survive. Such reflexes are appropriate to immediate threats, because they accurately measure the threat.
In contrast, a virus is unseen and unfelt until its actions devastate us. We cannot sense this viral threat in any sensual way, so our primitive brains are not prepared to react. Our emotional brains cannot measure and respond to the unsensed, indirect threat. If the seat of logic were also the seat of survival behavior, then the very first reports about the threat of AIDS would have been sufficient to change risky behavior. However, the evolutionary brain has relegated survival chores to the earlier mammalian and reptilian core, as directed by sensory input. We can only access that emotional core through macroscopic manifestations of the microscopic threat, which is precisely why personal contact with dying people led to such a lifestyle change in San Francisco.
Let us now look at another measurement error, totally unrelated to AIDS, brought about by a factor never experienced in human evolution before modern times: fast, horizontal speed.
We all have an inborn fear of heights, but not of horizontal speed. Human babies are born with grips strong enough to cling tenaciously to an adult finger in the delivery room. Such amazing neonatal strength is not conditioned by society, but is designed by heredity to avoid the danger of falling from the mother.
On the other hand, humans have no natural fear of horizontal speed, since this type of danger is an extremely recent phenomenon of the 20th century. This is precisely why so many Americans die each year on the highways, almost as many deaths as all the American soldiers who died during the Vietnam War.
Addictions are not specific to our species. It is easy to turn a cat or dog, or even a chimpanzee, into a drooling drug addict. All we have to do is take charge of the same basic areas of the brain that humans turn over when they sample addictive drugs.
The operational difference between animal addiction and human addiction is thus not physiological, but sociological. Animal addiction is extremely rare in nature. Human addiction, for social reasons, is depressingly common. We only have to count the number of alcohol and cigarette addicts to get a true picture of who is addicted; and that doesn't include the far smaller number of people who are only addicted to illegal drugs. Addiction to legally prescribed medicines is a separate category, but that number also is far from small.
We know why a substance controls the mind once it has been consumed over a period of time. We don't yet know why some people are easily attracted to such slow suicide, while others would rather die than be enslaved to any drug. I suggest that those who do not become addicts are operating from an "approach-avoidance" scenario: They approach their OK life, and avoid drugs. In contrast, potential addicts are also in an "approach-avoidance" scenario, but the direction is reversed. The potential addict wants to avoid his painful life, so he will approach numbing drugs. Where there is unseen but deeply felt pain, there is a prospect for chemical false nirvana.
There are other types of addiction. The key they all seek is satisfaction of a strong, primitive craving for secure wholeness. One such example is exercise addiction. This is also known as athletic body image syndrome. The exercise junkie needs his or her daily fix of exercise. Even one missed day causes sharp anxiety. An injury can precipitate deep depression until the exercise addict can resume his or her patterned obsession. One female exercise addict told me she uses her high-mileage routine to control excess body fat. To her any weight over 100 pounds is "fat." Fat equals old in her mind. Old equals not desirable. Conversely, "thin" equals young, desirable, and worthy of the protection given to children. This is a clear example of how an apparently irrational obsession has a quasi-logical core which resonates with the body's deepest security concerns.
So-called love addicts display traits highly valued by society. However, too much of a good thing can be a bad thing. And even the right amount of a good thing done for the wrong reason is not authentic. Love addicts caricature our society. They cannot separate their needs from their love objects. Loved ones for them are indeed objects, not subjects. At the core of their lust is an I-It relationship, not an authentic, agapaic I-Thou bond. Love addicts may cast aside "used" loved ones; but they themselves are the real victims of their obsessions.
In the 19th century an obscure sociologist, Gustave Le Bon, wrote a small book entitled The Crowd. His life experiences were not too removed from the swirling chaos of the French Revolution and Napoleon's years. Le Bon saw crowds as having a dual nature. They could be heroic, or they could be incredibly cruel. The crowd acted as if it had a mind of its own--and the individual humans lost theirs while immersed inside the crowd's greater will. His public world was one of street demonstrations and other mass movements. To a surprising degree his theory has been supported by such 20th century events as the Russian revolution of 1917, and the Iranian Islamic revolution of 1979. Even the hippie/anti-war movement in America displayed many Le Bon traits.
There are many other ways to access crowd psychology. One of the most obvious is television laughter. Ever since the long forgotten Hank McCune Show of 1950, home audiences sitting before their trusty television sets have been subjected to manipulative canned laughter. This psychological plague is still with us, since most shows are not live, and since advertisers need to keep maximum interest. Those few comedy shows that went on the air without canned laugh tracks are all ancient history. Nobody watched for long. It seems that people need to feel part of a laughing crowd, even if that "crowd" is a coordinated tape player. Here is a classic example of the lower brain overriding what the higher brain knows. We Americans don't like sitting at home with the TV and laughing by ourselves in the dark.
A recent television advertisement for a pain reliever claimed nine out of ten people would use the product again. Simple enough; but how did they visually show their point? Ten backlit people were shown standing in a line. Nine stepped forward together into the brighter light; and the tenth lagged behind in the semi-darkness. Then a spotlight shone on the lonely nobody. She was a short, bland, young woman. When the light pointed at her alone she nervously shifted her eyes.
That TV spot went far beyond making a simple point. It showed us how we are swept along with the group. It also manipulated us by reinforcing unspoken stereotypes we hold about the kind of people who could become social nobodies. Social deviants are always "wrong," and the group we belong to is always "right." We want to cooperate, to "go with the flow," not create static. We want harmonious cooperation, not disruptive competition, unless we think we will win. In brief, we want a predictable and friendly universe of social experience. Psychologically, we crave conditions in our modern world similar to those in traditional society. And why not? We are at our core the same people as those who lived thousands of years ago.
Because we are social creatures we identify with groups wherein we define ourselves. Humans are like dogs to the degree that we are predators without overwhelming skills. We must cooperate to survive and prosper. Our alliances in prehistoric times taught us skills we later used to construct civilization.
Nature abounds with potent examples of individuals subordinating their identities and personal safety for the will of the unit. Humans display a "pack mentality," even though we use finer words for it when describing human units. The worst example of human pack mentality is warfare. In the heat of battle people will do things for their compatriots and against their opponents they never would do alone. Countless wars were preceded by a romantic lust for battle. That, for example, is one of the underlying reasons for America's Civil War.
In war individuals belong to an abstraction and fight for abstractions. Their sentimental unity is the ultimate in-group vs. out-group phenomenon. Having chosen to be a soldier, no individual can easily refuse to fight, even though common sense and the Ten Commandments tell him that killing is wrong.
All ambiguity is temporarily avoided in the harsh heat of battle; and avoidance of ambiguity is one of the survival tactics of any organism. Avoidance of ambiguity is interpreted by the primitive brain as increased predictability, which means increased "security," precisely at the moment when death can appear!
Because war is an abstraction, and because history is written by the victors, most Americans don't fear war as much as they fear a trip to the dentist. Humans have no genetic fear of advanced warfare, since organized violence on the modern scale has seldom been witnessed except by a few who faced the Huns or similar invasions. The primitive brain does not think in national dimensions, but it can understand a drill in a tooth. We have no innate fear of being killed by remote military ordnance. Only terrorist weapons cause fear, because we can understand such weapons on a personal scale--but then only until they are used and the fear of the unknown associated with "terror" is rationalized. (During the IRA bombing campaign of London the populace got bored by it all, which is the exact opposite of the reaction the bombers hoped to elicit.)
Fear of fire is also not innate, even though avoidance of searing heat is learned at an early age. Of course, fire has been used by people for many thousands of years; but even this era is a brief period in evolution. I suggest that fear of fire is similar to fear of war, where the pain of direct encounter is not transferred genetically, only socially. We teach young children to fear fire, and we try to teach jingoistic populations to fear war. Just as logic cannot prepare an innocent for the sensation of fire, logic cannot prepare a young soldier for his first bullet in the chest.
Even though history has recorded the military deaths of millions of people in this century alone, I don't think anybody has lost his blood lust. If anything, war at a distance is even more romantic than before. If you go back hundreds of thousands of years, or to parts of Papua New Guinea today, you will find ritualistic warfare. Ritualistic war is part game, part theater, and part test of manhood. Only incidentally would such a struggle be perceived by the combatants to be for territory.
The idea of fighting for rights to oil reserves would be totally absurd in a traditional context. Combatants today mentally fight for romantic abstractions such as "freedom" or "Allah," even while personal scripts of proving manhood are present below the surface.
I suspect that all of this blood lust can be linked somewhat through depth psychology to the male's displaying before females to acquire the most desirable mates. Deer fight with their antlers, and we fight with our machine guns. The big difference is that deer don't kill each other, while we kill each other (and the deer). So, which species is the more "civilized"?
Humans may be unique in their ability to form sentimental groups from symbols of culture. Modern social units are a qualitative leap from the older, directly beneficial groups such as our nuclear families. Sentimental groups include nation states, international fellowships, racial fantasies, and even the concept of Spaceship Earth. It is well that we can conceive of such all-embracing unity, because in the end that commonality does indeed have a direct effect on our global survival prospects. Such is the dialectic of concrete to abstract, and thereby to concrete on a higher level.
Cognizant computers cannot share in the mass hysteria associated with many social movements. Mass psychology is mostly a biologically focused psychosocial phenomenon of the lower brain. Still, advanced computers can interface with humans as counselors:
Comphumans will be able to network around the world, so that a higher level consciousness can be spread worldwide. With knowledge that is shared by potential enemies, a lessened sense of alienation can be had. We don't fight I-Thou fights; we fight I-It fights. It is the "it-ness" of alienation which can be swept aside by a neural network of global consciousness. Therefore, even though comphumans don't exhibit mass psychology, they can interface with our mass psychology in the best possible ways.
Comphumans may coordinate and cooperate with or without humans in their loops. Such cooperation may function as if it were along human group psychology; but that would only be a functional fiction. The form may be the same, but the foundation would be fundamentally different. If and when we humans are able to learn how to codify our primal patterns, then we will be able to educate comphumans as to how we really think in certain situations.
Codification will be somewhat necessary because, except for systemic harmonics, comphumans will not automatically understand how we existentially feel about things as social creatures, and how we may react emotionally to certain social stresses and challenges. Codification is also good for humans to understand others, and ourselves.
Atavisms are ancestral traits appearing today. These throwbacks to earlier forms of behavior manifest themselves in quite odd ways. Since there is no pure separation between primitive, genetically influenced behavior, and systemic defensiveness, it is best to look at all atavisms from within systems analysis.
Cats exhibit atavistic behavior as adults. Anybody who has a cat knows how it loves to rub against our legs, and when it is in our laps it likes to "make bread" with its clawed paws. Looked at only from the adult cat perspective, such behavior is paradoxical. However, such behavior is appropriate for kittens who need to rub against their mother to induce her to lie on her side, so that they can then "make bread" at her teats for milk.
Humans display similarly primitive behavior with thumb sucking, fingernail biting and other deviant behavior. Kinky sexual preferences and fetishes in all their varieties attest to deeper needs which have not been met in childhood. Phobias too come in many forms, and they nearly always have early links.
When an atavism--which can be likened to a demiurge, or little demon, within the soul--forces a person to behave in strange ways, it is common for that person to feel alienated from the greater society which does not share such behavior. The feeling is mutual from society's perspective. Such alienation is deeper than what Marx and the other dialectical materialists meant when they talked about workers being alienated from their means of production. Fundamental alienation embraces all possible elements of life, only some of which are economic.
I am reminded of a 60 Minutes show during the Vietnam War, on April 23, 1972. Morley Safer, the CBS news correspondent, was flying high in a B-52 bomber. He asked the crew if they were in charge, or if their computerized instruments were really in charge. His question was not absurd, given the stratospheric altitude and ubiquity of instruments. The crew to a man insisted they alone were "in charge." When Safer asked how they decided when to drop their deadly bombs--often at night, or by day over clouds thousands of feet below--the crew innocently responded that they used their computer.
Ever since people learned how to kill at a distance the stakes for war have been raised. Now our enemy can be thought of abstractly as "the enemy," not as a specific soldier with whom one battles man-to-man. Patriotism, another abstraction, can be defended by an alienated army killing great numbers of reified enemy personnel, even killing their civilian population, for whatever abstraction the reified soldier thinks he is fighting for.
What most people think is patriotism is actually nationalism, a parochial tribal consciousness. Patriotism is really doing what is best for your country and for the tribe of humanity. Nationalism is "my country right or wrong." Occasionally there is in practice some overlap between these two ideals, which explains why they both are so often confused. However, the hallmark of nationalism is its ethical blindness. Genuine patriotism is "nationalism with ethical eyes."
People aren't born with a fear of atomic weapons, which are technological abstractions alien to our ancestral consciousness. Evolution has not prepared us to deal with instant death at the hands of a distant and alien enemy while we are healthy. In this age of devastating weaponry the limits of nation state boundaries have become less relevant to survival for the human tribe. If crude nationalism made sense in the past, such consciousness is becoming less functional in light of the evolution of modern warfare. What is needed is global patriotism, which embraces both local nationalism and a global consciousness.
As countries drift toward exotic weapons in the 21st century it would be wise to consider how we primates relate to all weapons of destruction. Without innate fear there is a continuing danger of uncontrolled war--fought by alienated soldiers, for abstract causes, against innocent people who have been reduced to things ("gooks," "slopes," "huns," "rag heads," etc.) hardly worth statistical tabulation. All of this bravado is great when we are winning--but what would happen if we were on the losing side, and the others thought of us as worthless things? What goes around could come around. This is Murphy's Law of Karma.
Mythic alienation is the story of the Garden of Eden. Adam's wife is the product of one of his ribs which was alienated from his body; and then came that episode with the snake that led to their alienation from God and the Garden. In this way the Bible explains mythic alienation.
Alienation is furthermore the curse sages have long sought to avoid by searching for the unity behind all phenomena. Call it God, or call it universal mind, or just call it totality. Whatever it has been called, this unity is supposed to transcend all forms of alienation, since all dualistic phenomena are secondary, derivative shadows of the primary essence.
But I contend this cure can be worse than the disease: It is just as bad to alienate thought from the dualities of life as it is to alienate thought from the overarching unity. Excessive focusing on either extreme is fraught with conceptual distortions.
The dualists feast on alienated scenarios; and the everything-is-one-thing camp is alienated from the molecular structure of totality's fabric. It is almost as if the first group were to use microscopes only on the fabric of life; and the second group would only step back many yards to view the full fabric. It's trees vs. forests again.
Instead of the either/or problem, we need to use more both/and metaphors. We need to affirm reality wherever and however we find it. We need to know that the whole is contained in the part, just as the part is contained in the whole. We need to know that we as humans are both wholes and part of the total fabric.
Most significantly we need to feel viscerally that we are not essentially aliens to each other, and that all forms of life share a thin skin on what could be a very lonely planet in the vast darkness of space. We are of the Earth, and the Earth is increasingly of us. As conscious guardians of our globular space ship we humans are called to a higher duty than bigotry and mutual mass slaughter.
One of the mysteries of science and philosophy is why one individual would sacrifice his or her personal welfare for another, or for a group. The phenomenon of altruism seems to contradict the mandates of Darwinian evolution and Mendelian gene transfer.
Sociobiologists point to the honey bee's altruistic kamikaze defense, whereby it stings any intruder, but simultaneously loses its life when its stinger is ripped from its abdomen. Sociobiologists say that the individual is actually protecting a close group of relatives with similar genes, so that the group's evolution is ensured at the expense, sometimes, of the individual's. In this way the "fittest" is the fittest group of close relatives, not just the fittest individual who cannot reproduce outside a group context anyway.
In humans the issue is more complex than with honey bees. Yes, we can display bee-like behavior; but there are other manifestations of pure altruism that defy such simple formulas. On the other hand, some human altruism is actually disguised egotism, especially when fame is gained for money spent. Parents are great defenders of their offspring, which is a very bee-like business. At the same time, how can we explain the friend who gives his life for another; and how about the good Samaritan? Is this altruism outside one's family gene pool just another learned behavior, or is there something much deeper at work?
I believe that apparently "irrational" human altruism for strangers is a rich lode of emotion and character that defies simple dimensions. We humans are capable of heroism and love which transcends the demands of society and religions, even logical dimensions. I believe that we have a heroic half that offsets our selfish half. Pure agapaic altruism is love within its own rules that cares not for reward, but rather for the pure welfare of the loved one. Such apparently "irrational" behavior is nectar for saints and fodder for fools.
Is it possible that agapaic altruism is one window to our highest nature, to our reflection of "the image of God"? Is this ethereal altruism the final message of Jesus' mission on Earth?
What are the boundaries of affection and of ethics? At what point do we begin to care for another being? Do we restrict our love only to close human relatives, or to all human beings, or to all life? If we do extend our love, how will that love relationship change as we go further away from our closest human companions? In brief, what is our moral universe?
These sticky questions are properly religious, but they must eventually be answered in the context of our daily lives. Dialectical emergents provide the framework for analysis. Absolutist approaches may seem clean, but they are too often mutually exclusive. What we need is an ethics that will embrace all absolutist approaches as much as possible, and which is compatible with the way people really live. At the same time we must beware the seductive appeal of absolutist ethics, which artificially simplify moral life and become junk food for the spirit.
Nobody denies loyalty to oneself. Nearly everybody agrees that love for one's parents and children is proper. Almost as many would include siblings and other close relatives. After this point opinions differ. Mafia types divide the world into "family" and "strangers." At the other extreme some have pointed out that plants have "feelings" and a life force. Is it better in the absolute to eat a plant than an animal? Do we have an absolute right to eat anything other than seeds and biological surplus? Do we have a biblical right to eat anything other than humans and pork? There can be no absolute answer to this type of question. It is a matter of taste.
Nevertheless, we can approach the question from our attitude toward our food. When we chomp down on any organic food without the slightest thought of the sacrifice of its source we are profaning the life of that food source. On the other hand, if we pause and honor that food's ultimate sacrifice, then we have restored dignity to its memory, and raised ourselves higher in the hierarchy of consciousness.
What do we owe others in war? Can we as individuals escape the ultimate penalty for acts committed during war, even if we were acting "under orders"? The Nazis tried that argument at Nuremberg, but the court decreed there are standards of conduct which are above and beyond any secular authority. This points us to the Biblical paradox where Christ said to render unto God what was God's, and unto Caesar what was Caesar's--yet he cleverly avoided defining just what was God's and what was Caesar's.
Looking directly at what is selfish, and what is selfless, we note that nothing is entirely "self." All things have a history of emergence from others, so that every subject is initially an object to others before that self acquires its own identity through consciousness. Even a human is a nameless "it" to others in the earliest stages of gestation. All subjects are initially objects; but only some objects become subjects. Biological creation is a dialectical emergence whereby one appears from two to make three. Creation also implies negation of the antecedent; otherwise, a vast population would crowd the finite surface of our planet.
The need to overcome entropy is so great that each species endangers the fortunes of its individuals for the cause of procreation and nurturing. After the new generation is spawned Nature cruelly ages the parents to make room for the children and their fresh genetic progeny. Cultures themselves exist in part to support this process with minimal stress to the individuals.
Given all the above, is there room for enlightened selfishness? If there were no selfishness in the best sense, there would be no individual excellence, no individuality at all other than genetic variability. Society has patterns that need to be followed to replicate each generation. Within this apparently mechanistic dynamism there indeed is a major role for individuality: Individuality functions inside society as mutations function within gene pools, providing the spark for a culture's evolution. But at what point is individuality too much of a good thing?
As society becomes more dangerous to itself because of the hubris of technological advances in the art of mutual annihilation, adventurism by those who have their fingers on "the button" can be suicidal for our species. The only cure for this danger is for society to sponsor the highest ethic--merging selfishness with selflessness.
Selflessness is both a psychosocial and philosophical concept. Philosophically, the Buddhists seek sublime selflessness through the journey to enlightened nirvana. Other religions and religious practices seek likewise to have us separate from everyday consciousness and body obsessions, to focus on the divine. This process is good as far as it goes, but it doesn't go far enough.
Psychosocial selflessness is the domain of saints. It also is the domain of ordinary people who see beyond their immediate selves to the greater social self of which they are an organic part. Ironically, it is only when one emerges (through ecstasy) from the cloud of mass consciousness manifested by many forms of irrationality that one is able to see the mass consciousness itself. When we can at last view what was invisible within our socially defined world view we are then able to see where its leanings are leading. If we see a drift toward mutual annihilation, or just regional annihilation, we are bound to express our highest selflessness through selfish acts that separate us from and challenge the blind social ethic.
Personal ethics are not separate from life and religion. Our personal ethical perspective is the rudder we use in the stormy seas of life. Most people "borrow" somebody else's rudder in the form of a religious package of belief. A very few design and implement their own rudder. It is up to philosophy and theology to help us decide where to steer our boat with whatever ethical rudder we have chosen.
The next chapter deals with the tapestry of religion as a psychosocial phenomenon. My interest is in discovering how and why we allow religion to tell us what to think and believe. I am looking in the next chapter for the life-affirming and life-enhancing aspects of religion, to find how they can be liberated from the life-diminishing aspects of organized belief systems.
It is only from this higher perspective, building upon the current analysis of how we really think, that we will be able to appreciate the Theology of Hope.
"Never wage war on religion, nor upon seemingly holy institutions, for this thing has too great a force upon the minds of fools." -- Francesco Guicciardini, Ricordi Politici.
Religions are social engines that codify, rationalize, and enhance previous folk traditions. There are no formal religions without roots in social organizations, and certainly none without roots in human psychology, which is itself rooted in brain physiology. Religions cannot be reduced to brain physiology. Still, we must accept that our concepts of divinity are not independent of our human ways of thinking and feeling.
Even the emergence of monotheism is a logical outcome of the desire for simplicity in our lives. Polytheistic and animistic gods serve to appease local needs, but they cannot exercise supreme power (by definition), since each god or goddess only has a portion of the total power of divinity. It is logically elegant to assume that all power should be controlled by one high God, and that this supreme God allocates power and favors according to inscrutable divine wisdom. In light of the advantages of monotheism over polytheism, what is surprising is not that monotheism developed, but that it took so long for the one-God thesis to dominate society.
The emergence of monotheistic Islam in the seventh century is a case in point. The various Arab tribes all worshipped local deities. There was already a tradition of travel to local towns for festivals devoted to each local god. This pattern was good for local business, and it also provided a pleasant excuse for people to come together for social purposes.
When Muhammad transformed the Jewish and Christian tradition of one omnipotent God into his prophecy he was just putting into the Arabic language what was already basically revealed to the "people of the book." It was only due to political tensions among various groups that Muhammad was invited to Medina to mediate their problems. Leaving Mecca for the new base was the decisive break with his roots. It enabled him to begin the conquest of Arabia for Islam.
Thereafter tolerant local cultures were wiped out in favor of one unifying, but intolerant, view of the world. The staying power of such an authoritarian-submissive ideology is painfully evident in Middle East politics today, and will continue well into the 21st century--as geography, ethnicity, petroleum wealth, rival religions and other forces feed into the authoritarian-submissive framework erected by Muhammad in the seventh century.
Islam does not have a monopoly on authoritarian-submissive ideology. Even though much of the non-Islamic world distrusts the Muslims for their history of conquest, it should also be noted that the Israelites conquered land after they fled Egypt under Moses' leadership. They also claimed God as their first conscript. Crusading Christians too were not averse to conquest in the name of divine intolerance. Indeed, looking at the total history of each religion battling over those desert lands, it is not hard to see how tolerance has been pushed aside by power-hungry princes who have learned how to motivate their masses with authoritarian-submissive religion.
We might call this "Gresham's Law of Religion." The original Gresham's Law said: "Bad money drives out good money." Our new "Gresham's Law" says: "Bad religion drives out good religion."
It is one thing to conquer territory. It is another thing to conquer hearts. Nevertheless, physical conquest often leads to psychological conquest, since the core of religion is closely allied with the core motive of daily existence--survival. The most successful empires have generally been those that have allied religion and power politics.
One of two strategies has been followed: The first strategy is the Roman model, where the conqueror displays tolerance for the local religions as long as the local religious leaders genuflect to their conquerors. The second strategy is the early Islamic model, where whole populations are converted, or else. Neither model has generally been pure, however. Muslims have generally been tolerant of "people of the book"--Jews, Zoroastrians, and Christians--as long as the partially enlightened subjects displayed secular loyalty to their fully enlightened conquerors.
Another post-conquest pattern sometimes reveals itself. That is when the militarily conquered become the cultural conqueror. China traditionally has absorbed the northern barbarians who have invaded her territory. India is another cultural sponge which has maintained its robust religions through acculturation of conquerors, even when they were Islamic rulers who modified their practices to harmonize with Indian styles. Rome's conquest of Greece led to the strong influence of Grecian culture on Rome, to the point where many Greek gods were embraced, but renamed, by the Romans.
Once a new religion is established it is imperative that the encroaching religion capture and hold the allegiance of each individual convert. The required allegiance goes beyond the outward signs of obedience to power. It must enter the heart of each religious participant.
If a religion cannot continually win the hearts of the majority, that religion will eventually be absorbed, revised, or simply overthrown. This is especially true of secular religions such as Marxism-Leninism, because the truth value of secular religions can be judged within this life. Competition is also evident in Africa, where the militarily superior white Europeans injected Christianity into native cultures, only to see their Christian ideology transformed by native African consciousness.
All successful religions are functional entities integrated with society. They function as part of the social glue that maintains the established order. The best example of this adhesive power would be the role of the medieval Christian church in Europe. The stated role of the church was to harvest souls for Heaven. Nevertheless, the church also was a temporal power with elaborate institutions that needed money and protection from secular competition. Thus, bishops and popes were careful not to alienate powerful princes. They promoted the concept of reward for virtuous obedience coming in the afterlife; and they promoted the concept of salvation by works, which helped support the Church's coffers. For a while indulgences were sold, so that one could buy forgiveness from sinful acts.
In this way poverty was ennobled, while the uneducated masses rendered unto Caesar that which was Caesar's. Kings ruled their secular lands by divine right, and the Pope ruled over all lands by the Petrine Doctrine. Everyday life in medieval Europe appeared structural-functional.
Martin Luther and other radicals sought to overturn the monopolistic despotism of the Roman Church when they proclaimed that faith could also be a personal thing, and that good Christians could commune directly with their creator both through reading the Bible and by prayer. This direct link to God challenged the power of the religious bureaucracy, helping to inspire the Thirty Years War and a theological rift which has never healed in Christendom. We should also note that Luther was backed by his own Germanic princes who wanted to expand their secular power at the expense of other princes who were backed by the Roman Church.
Despite the reformers' successful challenge to Papal hegemony, direct-prayer theology itself created a major potential problem: A logical outcome of direct communication with the ultimate power source is the eventual atomization of all types of organized religion, where each believer is effectively a religion unto himself or herself, and all clerical hierarchies are bypassed as irrelevant to the I-Thou dialogue with God.
The new Protestant reformers cleverly overcame this danger to themselves when they emphasized that people still have a duty to pray together, to participate in congregational activities, to cooperate on missionary projects, and to develop a community of believers. This strategy was well received, since human beings are social creatures who seek group approval for their beliefs.
Each religion is a package. It is a package of traditional beliefs and practices dressed up as absolute wisdom. Each package appeals to the basic human desire for security. This appeal is directed to the need to know the future, to justify the past, and to understand the present. Each believer is given the keys to certain knowledge, as codified by holy books, rituals, and traditions. Such a package of absolute revelation abbreviates what would otherwise be a long and puzzling search on the part of individuals for truth.
When truth is doled out by infallible texts no other questions about ultimate reality need be asked. Such certainty frees the emotional mind for other, more mundane tasks, such as earning money for tithing.
Religions in the modern world have sometimes slipped away from their success formula. Recently, certain established Protestant churches have relativized their truths. That retreat from doctrinal certainty has alienated many of their old-fashioned adherents--even leading to schisms within some denominations such as the Protestant Episcopal Church, where some congregations have returned to the Anglican liturgy.
When we ponder the recent phenomenal success of certain television evangelists, we should look at them in light of the basic human needs they are addressing. You will never see anything short of absolute certainty in those slick shows. And there is another, special element in their messages: Salvation is easy and instant, if only one is "born again." Floods of money came to Jim and Tammy, as thousands bought their teary promises hook, line, and sinker.
Those who were duped by Jim and Tammy cannot be blamed--at least their emotional brains cannot be blamed--since it is "only human" to want easy access to a warm and fuzzy future. Who would emotionally turn his back on a smiling huckster if he had the keys to Heaven? It might at first be said that such followers are exercising Pascal's Wager. This is not so, because one who wagers still retains doubt even within commitment. True believers no longer doubt. Their pure belief is purchased at the price of lost authenticity.
Hucksterism is not unique to America, even though the power of television has amplified the presence of individual hucksters in the West. People everywhere are susceptible to instant nirvana pitches. The Pure Land Buddhists in Japan say that salvation awaits anyone who simply calls on the name of the Amida Buddha, who allegedly can bring the faithful to an eternal Pure Land paradise. It is not surprising that a majority of all Japanese Buddhists are Pure Land followers. It is much more difficult and time consuming to get a grasp on the ultimate reality if one is a Zen Buddhist--and the Japanese value both time and comfortable precision. People everywhere love fast food and fast religion.
On the other hand, psychological security can also be found in rigorous religious rituals. This is the opposite of the "fast faith" approach to salvation. Suffering is "proof" that faith is justified. Job was the early model for this type of masochistic religion, and there have been others walking in his footsteps. In the medieval era the flagellants inflicted pain for faith. Witches were burned to save their souls, but not their wicked bodies. The Muslim faith is quite demanding with its sets of rules for fasting, daily prayers, the hajj to Mecca, and so forth. Many religions stress severe, exclusionary dress and behavior codes--the Amish, the Hutterites and the Hasidic Jews being just three prominent examples of this in-group vs. out-group behavior. People everywhere love to gain through pain to prove and justify their choice of faith.
The emotional brain likes shortcuts to answers. Survival itself is a stimulus-response loop, the briefer the better. The cerebellum coordinates physical movement. After one learns a movement through trial and error, the cerebellum is able to carry out that complex movement without hesitation. Such a timesaving pattern has survival value in the face of physical danger from instant threats, but not necessarily against complex and delayed threats.
Nearly everything we do is formed by patterns we have learned in childhood. Language is a prime example. We unconsciously speak the complex grammar and detailed vocabulary of our native tongue. There is no re-invention of the wheel for individuals. Not only do individuals carry definite patterns of language within their brains--groups of individuals carry the same harmonics, which enables them to converse with speed and accuracy, which itself has survival value.
It is not by statistical accident that your religion is most likely that of your parents and their culture. In Thailand and Burma one "naturally" becomes a Buddhist, and in Saudi Arabia one "naturally" becomes a Sunni Muslim. As the twig is bent, so it grows.
DNA itself is the repository of patterns and conservative replication. I am not arguing that religions are ultimately based on DNA. Rather, I am pointing out that our body's ultimate building blocks themselves are conservative and patterned. DNA is, literally, structural-functional.
Religions are at their institutional best when they are structural-functional and virtually unchallenged from within. Heresy has always been a much greater crime than unbelief.
Logically, there is no automatic theological superiority of orthodoxy over heterodoxy and heresy. It is a matter of relative perception. From the viewpoint of the orthodox believer a deviant is by definition heretical. From the viewpoint of the "heretic" the orthodox adherent's belief is merely heterodox relative to his own. Wherever orthodoxy is enforced it is backed by secular forces, not really by the force of logic. The Spanish Inquisition was the best example of such political theocracy.
Things get strange when different "pure" theologies manifest themselves as intolerant religions seeking to occupy the same religious turf. The world has suffered "truth-vs.-truth" battles for many centuries. These battles have been both bloodless and bloody. Even today we see institutional forces arrayed against each other in various regions of the world, as where the Shi'ite Muslims compete for prominence against the Sunni Muslims. Then there are further offshoots, such as the Sufi orders within Islam and the Baha'i faith, which grew out of Shi'ite Islam into a separate religion.
There is no stopping such schism, as more "revelations" can always appear, not just within Islam, but also within any other orthodox religious tradition. Orthodoxy is at best a Maginot Line against zealous religious sentiments.
If the "social body" were as conservative as the genetic body, then life in the land of religion would be harmonious. But religions today must compete inside a rapidly changing technological world. Ideologies that were functional even recently are now in danger of becoming dysfunctional in light of new conditions. When the social body changes much more rapidly than the genetic body dialectical opportunities arise for new forms of homeostasis, either in religion or in wisdom. Change brings danger; but it also brings opportunity.
If religion were everything to everybody, then it would mean nothing to anybody. Religion is a social institution that must be split according to ethnicity and according to history for it to be comprehensible to the many different cultures on the surface of this planet.
If a God were everything to all beings, that God would have no discernible identity. That is why we have gods with "personality"--jealous gods, quarreling gods, vain gods, forgiving gods, and so forth. All of these gods (be they polytheistic or monotheistic) display within religion an amazing set of anthropomorphic personality traits.
We project into our image of God fantasies of paternity and maternity, as well as many other human social relationships. In this way we are able to "relate" to the mysterious as if that mysterious entity were just a magnification of our everyday life.
When we have one God to focus on, that simplifies our task. It also magnifies the risk we take. If we err in our relationship to that one God we are at risk of eternal damnation. On the other hand, if there are many gods sharing in the total power of divinity, then we could ally god against god for our own benefit, using magical rituals and other communication channels. This is the charm and safety factor of polytheism, and it helps explain why ancient Hinduism has never been superseded by Islam or Christianity.
Theologically, all of the above packages appear secondary and absurd when faced with the twin primary tasks: first, simply knowing whether there is a God at all; and second, defining that god essence in terms accessible to our emotional and intellectual brains. Distracting individuals from the implications of these deep problems has been one of the major jobs for organized religion, since probing thought corrodes religious dogma.
Because individuals live within a consciousness of history, it is important to have a historical concept of God. History is one thing--the need for history is quite another phenomenon. History is by itself just the documentation of the past. It shows us where we have traveled, but alone it does not show us where we are going. In contrast, we humans emotionally need to know not only where we have been, but also where we are now in relation to where we think we are going, and even to where we want to go.
One of the most powerful aspects of organized religion is its orchestrated sense of roots. Just as a tree has roots, cultures also have roots. In that sense, cultures are more like trees than birds. Humans individually dream of flying, but in fact they are always seeking roots to hold out against the winds of fate.
When Semitic Jews and Semitic Muslims hurl rocks and bullets against each other in Jerusalem they are fighting not just for turf, but also for their root identities as historical peoples. Newton remarked that no two physical bodies can occupy the same place at the same time. His statement could in essence include religious bodies fighting for "space" in the hearts of mankind.
Animals don't feel the need for history. They live in the here-and-now quite happily. Yes, they do have a genetic history, but their genetic history serves the present; it does not impede the present. Animals don't have consciousness of their future, either. Humans alone feel the need for a past-present-future identity. Their religious communities define a teleological flow of history that fills the need for roots and destiny.
It is interesting to compare the consciousness of animals and machines. Both perform in the here-and-now, without a sense of history or future. Only humans have that rich vertical dimension behind every contemplative thought. Still, many philosophers, especially those in the East, praise the here-and-now consciousness, because such is a form of authentic relationship with reality. To be here now is to be in touch with the real world, not just the world of our fantasies.
Comphumans will be both machinelike and more human than today's machines, since they will have the computer's ability to focus firmly on the here-and-now, but will also appreciate a society's sense of time.
One way to look at religion is to see it as a play between the forces of stability and instability. This is another way of seeing things in terms of negentropy vs. feared entropy, of order vs. disorder which leads to chaos. Religion poses as the negentropic force of homeostasis, as opposed to the entropic, heterostatic forces of doubt and unbelief. Is it any wonder that religion resonates so well with our deepest needs?
Peter Berger, the sociologist and former teacher of mine, in his The Sacred Canopy (1967) suggested that religion is central to our world-construction. He sees the sacred cosmos opposed to chaos: "The sacred cosmos, which transcends and includes man in its ordering of reality, thus provides man's ultimate shield against the terror of anomy. To be in a 'right' relationship with the sacred cosmos is to be protected against the nightmare threats of chaos." From Berger's perspective it thus would appear that the search for ultimate truth takes a back seat to fear of chaos. Anyone who would even question the world constructed by specific cosmogonic myths becomes a potential ally of chaos.
Just as the ocean is a permanent presence for one who lives at its shores, so too the idea of a permanent, caring God is a solace for the believer who feels that both his present life and his afterlife are under the benevolent direction of an unchanging divinity.
It is common knowledge that periods of personal crisis leave individuals most vulnerable to conversion and "rebirth." In contrast, when people and societies are comfortably into a groove they seldom ask fundamental questions about the meaning of life, if those questions would threaten the happy curve. Nor are they at that time particularly receptive to those who would offer answers different from the religious package they already have bought.
Messianism is received most readily by whole peoples who now doubt the spiritual safety nets their old cultures have provided. When doubt about one's culture--and especially about the economic and political base for the old culture--is strong enough, there is an opportunity for something new to challenge the old social fabric. A new or renewed homeostasis is automatically sought in the form of a new ideology.
Africa displays the classic pattern of alien Western Christianity replacing discredited local deities--while at the same time those deities renew their energy as they are partially incorporated into an Africanized Christianity. This is a good example where all sides "win."
A similar pattern emerged in the early centuries of the Christian Church in Rome: The early fathers moved Christmas from a non-celebrated time in spring to December 25, which just happened to be the cultic birthday of Mithras, the imported Persian sun god. It is no accident that Mithras was targeted, since in 274 A.D. Emperor Aurelian proclaimed this pagan cult a favored religion. In the early 300s the cult ran neck-and-neck in popularity with Christianity, with the tie broken only after the Roman emperor Constantine embraced Christianity.
The presence of a caring God is, theologically, just a hypothesis. In religion, however, the felt presence of a caring God is not a hypothesis, but a visceral fact of belief. It is one of many grounding beliefs. Any remaining doubts only add to instability, poisoning the peace of mind purchased by the sacrifice of intellectual purity. Because the emotional mind rules the intellectual mind, those who embrace religion have no qualms about what they have done to truth.
The psychoanalyst Wilhelm Reich spoke of muscle armor that leads to character armor. Rigidity in any part of our bodies and thoughts leads to a chain reaction wherein our potentials are blocked by the hard resistance of rigid character. In contrast, flexibility is like the reed in the wind: an oak tree is very strong when the winds are calm, but can break in a major hurricane, while the reed is very weak in the calm, but invincible in a hurricane.
Character armor and rigid religions are like oak trees. They hate the winds of critical thought, and will to their utmost ability do what is necessary to quell individuals who "blow too hard." This is why the institutional Christian church so enthusiastically burned heretics at the stake. Such action makes good sense from a systems perspective, even if it doesn't from the perspective of theological honesty. Let us not forget that our churches are financed on Earth, not in Heaven.
In daily religious life it is sometimes difficult to avoid the questions skeptics always raise. These pesky questions take many forms, such as questions about the various translations of the Bible, about its author or authors, and even about the ultimate problem: Who or what created God? Religions defend against this intellectual cancer by denial wherever and whenever possible. However, there are other ways to treat the systemic threat of questions-without-neat-answers:
Religions can erect a rigid set of defenses that go beyond denial. Believing the best defense is a good offense, adherents are challenged to believe even in the face of doubt. Belief becomes not only a test of faith, it is a badge identifying the believer as among the faithful who have shut their minds to the noise of questions-without-neat-answers. The word bigot itself is often traced, though the etymology is disputed, to an old oath meaning "by God." Religious bigots are intolerant of other religions. Such intolerance is supported by the delusion of certainty which, of course, is equal to one's particular dogma. Such intolerance would merely be quaint if religion were not a social phenomenon intertwined with power politics.
Individuals who "know it all" are potentially extremely dangerous. Such individuals sometimes claim to have direct access to and guidance from God, which places them totally beyond reason. This is a perverse but logical extension of Martin Luther's revolution which did away with clerical authority. Such people feel they are forever sinless, no matter how many sins they may commit. It may not be entirely by accident that the land of Martin Luther became the home of Hitler's Third Reich. (There's a joke in Texas that says when you encounter a boasting born again Christian businessman, grab your wallet and run for your life.)
Certain religions can encourage religious bigotry. Any religion that is based on the claim of access to perfect, revealed truth (the "word of God") can be a vehicle for those who seek power beyond sanctity. Such people as leaders can be quite charismatic, since human nature avoids ambiguity and is attracted to clarity. Clarity provides cover for bigotry.
Nevertheless, questions of transcendence must forever be ambiguous at their theological core, since humans can never grasp all the essential facts with absolute certainty. This is how the bigots counter philosophy: they do an end run around honest logic, substituting their "revelation" for cogitation. It is presented as the theological equivalent of medicine's "magic bullet," but it functions more like a theological lobotomy.
The price to be paid for such rigidity is loss of intellectual honesty. If the conclusion is known even before the question is asked, then why bother asking any more questions? That is the goal of dogmatists: stopping any more questions. However, the cessation of questioning in the midst of a rapidly evolving society cuts us off from new experiences, new realities, new facts. The future becomes the past in our minds, because no new experience is allowed in.
In contrast, honest flexibility is not rootlessness. It is an openness which can embrace the new without losing the best of the old. In the future rigid religions will, for millions of people, be superseded by flexible religions. These modern religions will have a strong honesty, inspired by simple comphuman theology.
Psychologists speak of the addictive personality. People can become addicted to almost anything that satisfies primitive cravings. Religion's appeal to security, through removal of doubt, is indeed an "opiate" for religious addicts. At least on this point Marx was right.
There is nothing psychologically wrong with being very devoted to an idea of God, and to transcendent ethics. After all, nobody can logically disprove the existence of God. On the other hand, there is a lot wrong with becoming obsessed with a very narrow, often punitive, concept of God. It is when we transcend basic belief to become a "true believer," to use Eric Hoffer's term, that we slip into the world of addictive delusion.
Fundamentalists have long been troubled by the number of different Christian churches in America alone, now about 700 different shades of the same theme. This splintering of one historical tradition allows for much competition for souls, so that each competitor must try harder to sell the prospect. Prospects themselves look for a church community, but they also look for the golden keys to Heaven.
It is one thing to get people into a church. It is another thing to keep them in the fold so that they will not listen to the competition. Members in some churches are led to believe that people outside their particular church are in jeopardy of going to Hell. Such members are controlled by the chains of fear and guilt. Some churches expect attendance several times each week at different church services, in addition to various church activities. This routine imposes a very busy schedule that keeps hands, hearts, and minds from mischief.
Churches can become functional families, at least on a superficial level where one can socialize in one's best clothes with like-minded "family" members. We present our best selves at church, which reinforces our feeling of moral superiority, and which can lead to moral abuses in the real world against those not inside our holy congregation. In Texas I learned that when a slick businessman trumpets his born-again credentials it's time to watch your wallet.
Church picnics allow us to indulge in food, the only sin or excess not condemned by puritanical sects. Sexuality is perceived as a great threat to church life, since the procreative urge is nearly equal in power to the search for security, which religion tries to appropriate. Sexuality is not something the church can control, so it opposes the challenger in many subtle ways, including the support of other basic needs that might distract people from lust. Among the major religions, only Tantric Yoga incorporates sexuality into its core, and even then only in a ritualistic way.
The addictive personality is at root an obsessive-compulsive personality. This type of person is like a glass of water with a hole at the bottom: no matter how much love is poured into the top, the bottom leaks and more love is required. The "born again" addict imagines he or she has a "new glass," when in fact the addict only has new liquid in the old glass of a defective emotional shell. Religion never patches the real hole, but it can keep enough "water" in the glass that the religious addict imagines the hole is gone. However, if the inflow stops the addict can be worse off than before, suffering from intensified disillusionment.
Addiction is a program script of the lower brain, and it can be understood in systems theory terms. The lower brain is only interested in survival and procreation. If something went wrong in childhood, the lower brain will spend the rest of its life looking for that missing key to security, often at the expense of the higher brain. To heal this pool of primal pain, our social and spiritual lives must drain it by addressing the language and sources of that lower brain.
A common theme among many religions is that of sacrifice as witness to one's faith. This theme is as old as the story of Cain and Abel, or the Aztec practice of cutting out the hearts of young men. Literal sacrifice survives today in the slaughter of rams by Muslims (and historically by Jews) to commemorate the total submission of Abraham to his one God.
In sports training the slogan is "no pain, no gain." Something similar could be said of religions which demand at least some change in lifestyle. Usually much more is demanded, such as tithing a tenth of one's income. Many religions require a change in personal appearance to help advertise one's faith. In general, the more conservative the attire, the more reactionary the belief. Thus we see the Hasidic Jews, the Amish, some Hindu sects and others in America appearing and acting at odds with modern norms. Such devotion is neither good nor bad in itself, but it does alienate the believer from the majority culture.
Alienation from the majority culture helps bond members to their socially deviant group. This in-group vs. out-group phenomenon helps ensure the ideological purity of the deviant sect. Here, the "elect" minority feels morally and theologically superior to the majority. Our time on Earth is just a way station on the journey to the eternal afterlife, or so it is felt; why bother assimilating with those in error?
Whereas social cohesion is functional within the self-perceived in-group's context, that group is viewed as an out-group by the majority society. There is the additional danger that acting on one's socially deviant beliefs could lead to active persecution from the majority culture.
True fanatics welcome persecution--after all, that's how the early Christians won over the cynical Romans, letting themselves be fed to the lions in the Colosseum. In the modern world cult leaders feed on the human potential for paranoia. Jim Jones and David Koresh were only the worst of their lot. Religions seem to prosper because of periods of persecution. Whereas the Soviet Union was able to intimidate the rather submissive clergy of the Russian Orthodox Church, today's Russian churches are filling with young Russians eager to rediscover their cultural heritage. Even though the church appeared to retreat during the long Stalinist repression, it was only retrenching, since it always was the Russian Orthodox Church, not an alien import. As such it became a tangible symbol for the intangible concept of narod, the people.
During periods of social crisis and uncertainty the perceived stability of the cultural church is a magnet for great numbers of people. Authorities can capture church buildings with tanks, but not church members' hearts. Even though chaos or terror has followed the Roman church (the Crusades, the Inquisition), most Americans today associate the Roman Catholic church with peace and order.
In the Middle East and in such places as Northern Ireland and Bosnia religion has helped define and aggravate tensions among various ethnic and political groups. Each area has large numbers of people latching onto one or another religion and its temporal infrastructure, to help buttress and justify temporal struggles. Sadly, religions of peace have become pawns in war. In a shooting war God is the first conscript for both sides.
On a basic level we can say that part of the role of institutionalized religion within society is ideally to provide order when there is disorder; to provide comfort in crisis; to provide peace in time of war; and to provide the last rites for the dying as well as all other spiritual gate-keeping duties. Clerics help us cast aside doubts and get going with our everyday activities. The survival value of such absence of doubt is great; but the danger remains that we can also be focused in the wrong direction.
There are cultural religions, and there are culturally imperialistic religions. Imperialistic religions are authoritarian-submissive, and are always allegedly hooked into God's divine moral order. All imperialistic actions are thus justified as being directed by God, so that anything goes that adds to God's greater glory. Both individuals and entire societies can be called to sacrifice for the divine plan. Invariably, imperialistic religions merge divine power with secular powers, either formally or informally, as during America's "Manifest Destiny."
Soldiers for such imperialistic religions can be a problem for the architects of conquest. Soldiers are human beings, much to the disgust of their commanders who would rather lead brainless robots. Clerics are brought in to pour salve over scruples and fears, and (especially in Islam) decree that a dying soldier will be welcome in Heaven. So-called "holy wars" usually feature two sides, each claiming divine guidance, and each side telling its soldiers that their martyrdom will ensure them a place in Heaven. How disgusting and dishonest!
In times of war there is a sort of Gresham's Law of Religion: "Bad religions drive out good religions." In times of crisis and doubt there is no room for doubt, only clearly directed action. In war the first casualty is truth. There is no time in war for love and tolerance. When the smell of blood is in the air who really cares for the Prince of Peace?
All religions need a "dark side" to provide the antagonist and a value contrast to their advertised "light side" of salvation. In western tradition Satan has fulfilled that role. In Hebrew the word satan means "accuser." In the Old Testament Satan is the agent of God who accuses, or tests, the righteous--but not the agent of evil. The story of Job well illustrates this role for God's fallen angel. It was only during the centuries preceding the Christian era that Jewish extra-canonical literature began to depict Satan as directing his subordinate fallen angels against God. As depicted in the Dead Sea Scrolls, Satan is created by God to lead his forces against God's forces, so that the good forces can prevail and thus "end history" through the establishment of God's kingdom.
It is fairly easy to view Satan as a major player in a drama written and directed by God. Satan plays the role of the villain so that there will be something against which the good forces can struggle and inevitably triumph. Without an evil antagonist there can be no contrasting good protagonist--just ongoing existence, which is emotionally boring.
It never occurs to the Satan-obsessed that "existence" might be all there is outside our own world of values, so that whatever evil there is may be entirely of our own creation.
Satan is supposed to have free will and not be a puppet. However, the Koran implies that Satan is ultimately God's servant, even in Hell, and may in the end be redeemed. It is hard to reconcile independent evil with service to the good! This is by its own logic an absurd dualism. Either Satan is independently evil, or he is just the messenger and surrogate for another force, which may in the end be beyond good and evil.
Satan is just the most prominent example of demonology. Demons have been around for as long as people have tried to explain the unexplainable, especially the presence of otherwise unexplainable troubles in what "should" be a perfect world.
It is almost impossible for people to imagine that a divine creator would purposely include evil forces inside his exquisite order. This problem has been fairly well contained in Indian and Chinese cosmology where the gods have been multiple, with interpenetrating elements. Shiva, for example, both creates and destroys, transcending good and evil. Popular, traditional religions simply resort to a host of demigods, many of which we could call demons; none of which is totally responsible for the big show.
Demons are also summoned to explain deviations in secular society. Mankind is supposedly improving, due to advances in the physical and social sciences. However, wars and other massive evils persist. Classical liberal theory is reluctant to admit the chaotic into its formulas; so an external force is posited. These demons are not called demons in today's vocabulary. Rather, they are identified as alien ideologies, disease pathogens, Murphy's Law, and a few other ways to shift blame from oneself to something external and alien.
Whereas demons formerly were invoked to explain our universe's rough spots, now they operate as convenient excuses for our own inefficiencies, and for our own demiurges. Demiurges are forces and feelings within us that emerge when the existential world works in ways threatening to our genetic heritage. When data from the cortex conflict with acceptable parameters inside our emotional brain we experience anxiety, which leads to demiurges, felt and unfelt. When demiurges lead to actions and reactions which are counterproductive and costly we often resort to external demonology.
Religion is primarily a practice of the heart, not of the head. We need that emotional dimension supplied by a personal deity. Thus, we imagine that God has a human-like face--for European cultures, ideally as Michelangelo painted Him on the Sistine Chapel ceiling. It is comforting for humans to imagine a golden, medieval city in the sky. We need a holy place where we may go as spirit bodies in the afterlife. This is why we locate God and Heaven in the sky.
The sky is as close as our breath and as remote as infinity. Strange things populate the skies: meteors, powerful storms, the northern lights, tornados, rainbows, and the starry heavens. In contrast, Earth is too well known to us; we can walk or sail all over it, removing its mysteries. God needs to reside somewhere else: removed, yet immanent.
The Tower of Babel reached into the skies, but God twisted the tongues of its builders, the Bible says. Today's religions busy themselves with church steeples and mosque minarets, not towers of Babel, as they literally reach for the sky.
Of course, God doesn't have to "reside" anywhere at all. It is we who need a direction for spiritual travel. Our soul's final journey must not be an aimless wandering toward no specific place. That would be equal to death, or at least limbo. To leave and never arrive would be the spiritual equivalent of entropy. God is safely omnipresent, but humans need directions to a new nest to avoid dissolution; and the heavens are the best place for Heaven, at least in our religious imaginations.
We need to go back to Adam and Eve to find the "reason" for Hell. Until the first couple had knowledge of good and evil they were incapable of committing evil, so none of the human species could populate Hell for cause. The Devil might have been lonely, which may be why he sent his alter ego to the Garden of Eden for that fateful encounter with Eve. The ancient Greeks had more than Heaven and Hell, which they called Elysium and Hades. They also had a gray, nothingness form of afterlife, a kind of limbo. It could be argued that the metaphorical Garden of Eden was something of a moral limbo, or at least a moral nursery. Until they found wisdom the naked duo were happy robots for God. The infamous apple episode marked their evolution into independent consciousness, which brought with it the responsibility for moral choices. Because Adam and Eve became wise from their rebellious action they thereby became more godlike.
Just as it takes knowledge of Heaven to know Hell, it also takes knowledge of Hell to appreciate Heaven. We could be in Hell and not know it, if we have limited consciousness. Liberation from such a Hell can only come about by an elevated consciousness.
Of course, there is always being "born again"--but that path is a no-brain external solution from grace. I suggest that the highest awareness of Heaven on this planet may be found by each person through the internal wisdom of honest inquiry into the depths of one's own life and soul.
A final irony: An eternity in Hell is better for the soul than total annihilation at death. Obviously, nobody wants to sit on glowing coals for eternity--but at least it is an eternity with continued personal existence, rather than the chaos of final death. The strange idea of Hell-as-comfort can only be understood by an appreciation of the finality of entropic dissolution at death. Our primal fear of annihilation leads us to perversely embrace the idea of Hell as a comforting aspect of life everlasting.
There can be a close relationship between faith and miracles. A dictionary would define miracles as events in the physical world that appear to contravene all known laws of the universe, and which are therefore usually thought to be of supernatural origin. Thus, miracles can be a channel of communication between God and mankind.
Miracles are sometimes confused with magic. Magic is the attempt to manipulate the supernatural for human benefit. Miracles, in contrast, often appear spontaneously, sometimes even in our dreams. Saul's conversion on the road to Damascus was a miracle; so was Allah's choice of Muhammad as his messenger. It's even better than winning the lottery, because you don't even have to buy a ticket for God to choose you as his messenger. The Roman Catholic Church supports this connection between faith and miracles: a candidate for sainthood must first have miracles attributed to his or her name. In the Islamic faith the Koran is honored as God's final miracle. Nevertheless, many of the Islamic faithful wear amulets, invoke charms, and attend the commemorative anniversaries of local saints who are reputed to have worked miracles through Allah. Organized religions thus have a love-hate relationship with events that border on the magical. They welcome miracles as physical evidence of the spiritual world, but they constantly attempt to distance themselves from sorcerers and other tricksters.
An excellent example of this spontaneous phenomenon is the frequent reporting of apparitions among the Catholic faithful. Instant shrines can arise when a beam of light or a spot of dirt in an otherwise ordinary place is perceived as the image of the Virgin Mary. Thousands of pilgrims descend on the site, turning it into a makeshift shrine and a profit center for the local merchants.
Roman Catholic officials acknowledge that there has been an upsurge in recent years in reports of mystical apparitions around the world. More than 200 such events have been reported in this century, and many in the field point to such apparitions as evidence of unmet spiritual needs. When such an event occurs Catholic bishops appoint a commission to investigate. So far, no American apparition has been shown to be beyond conventional explanation. However, fourteen apparitions worldwide have passed this screening in the last 160 years.
Approval does not mean that Catholics are encouraged to believe in the events, merely that they are not forbidden to believe. In this way the Church keeps its dogmatic purity, while allowing for spontaneous sightings of the Virgin Mary. Skeptics in the Church question whether the Church should be encouraging such evidence of mass hysteria.
Theologically the jury is out, while crowds cram makeshift shrines. Such is the hunger in today's world for direct spiritual connection with the transcendent powers of the universe.
There is an antidote for such innocent hysteria: a trained awe. When we look at any event we are looking at direct evidence of the creative power of the universe. Seen from this perspective, everything is a miracle. Anything--the blossoming of a flower, the passing of a thunderstorm, even the breath of a loved one--can be perceived and felt as a miracle.
I believe all existence should be embraced as miraculous. Miracles themselves are only a small part of the miraculous. Since we cannot know the order of the universe, we don't really know what is the ultimate cause and what is effect. We cannot say with absolute certainty what is primary and what is secondary. All we can say is that the world presents itself to us in all its glory and mystery.
Our only honest response is simply to rejoice in the suchness of existence. Plato said that philosophy begins in wonder. When we move from arrogance to awe, we move from bigotry to wisdom and agapaic love. In the end, we are challenged to move out of our conditioned animal box into the free world of honest thought, and thereby join the creative being we call God.
Just as awe is the beginning of philosophy, awe is also the end of philosophy. Through our journey in life we may be lucky enough to end up where we start -- with a beginner's mind.
Prayer has been variously defined as the "ascent of the mind to God" (John of Damascus); "the opening of the heart to God" (K. Rahner); "a response to the prior love of God" (D. Steere); a way to know God "face-to-face" (G. Buttrick); and so forth.
Prayer can be silent and contemplative or vocal. It can be private or public. It can be ritualized or spontaneous. It can be in one's own tongue or by speaking "in tongues." It can also be in a tongue of ancient civilizations, such as Latin or Sanskrit. In brief, prayer is a uniquely human activity with significant religious content.
Prayer assumes certain things about the God-man relationship. First, it assumes that there is in fact a God who is there to listen and possibly respond to our prayer. It could also assume that our prayerful activities may inspire through grace a favorable response--which is dangerously close to, but not equal to, magic. However, when prayer goes beyond basic communication on an I-Thou basis to negotiations for favor many people part ways on that point:
Meister Eckhart said: "When I pray for aught my prayer goes for naught; when I pray for naught I pray as I ought." His position was that prayer for anything other than worship of God alone is idolatry, a position which has been shared by other theologians--just as it has almost universally been ignored by hordes of people who selfishly and shamelessly pray and chant for all sorts of favors.
Prayer is historically related to magical chants, whereby certain utterances are hoped to elicit positive responses from the targeted deities. Today's world is not so simple, but this has not changed the direction of prayer, only its focus. Old styles of prayer persist in many religions. For example, Tibetan Buddhist prayer wheels, some of which are quite large, contain sacred texts which must be rotated in a clockwise direction to release the benevolent powers latent in the mantras therein. Rotation in a counterclockwise direction could release evil powers, or so it is believed.
Predestination is an ancient and contemptible idea. Ethically, it must be rejected, even if we are in fact ultimately predestined. It has two forms: (1) that the future is determined by past decisions about our fates, and that all things are ultimately predetermined by God; or (2) that the future is mechanistically determined as if everything were billiard balls. Such a billiard ball universe would completely negate our free will and absolve us of all moral responsibility.
Augustine is the West's premier source on predestination. He held that almighty God chooses some to be saved through his mysterious grace. Because all people are born with original sin, Augustine decreed this strange preselection process to be just. John Calvin and Martin Luther also held strong predestination beliefs. Calvin spoke of a "double predestination," so that Christ died only for the sins of those already predestined for salvation! Calvin's position was very strange, because it was the equivalent of fixing something that is not broken.
Things got interesting when religionists tried to decide who was preselected for Heaven, and whose lot it was to sink to Hell anyway. Some groups held that signs revealed individual fortunes; but other groups held that our ultimate fates could never be deciphered, because the whole process is a mystery beyond man's consciousness.
As regards the second group, nothing more could be done than to try to live a holy life. The first group, in contrast, tended to extreme efforts to uncover who was to go where, engaging in such quaint activities as torturing alleged witches to rescue their souls.
Some groups, such as the Jehovah's Witnesses, have also tried to count the population of the saved, citing the Book of Revelation, which indicates a total through time of only 144,000 who will be taken up to Heaven. Why such a tiny sliver of the total population of the Earth, past, present and future? Is it because God's city is only so large and cannot be crowded? How does God know, in advance of their birth, how many worthy people will live before the end?
Epistemologically, neither prayer nor predestination can fully be understood by humanity. Furthermore, there is a logical paradox to it all: If the journey is automatic anyway, why mess with the controls? If the journey is not automatic, just what are the controls?
Prayer is indeed a valid and valuable way for us to attempt to communicate with a divinity who is beyond our logical understanding. We pray because we don't really have a two-way "telephone to God." What we do have instead is, so to speak, a telephone handset into which we talk through prayer. We don't know if our message is going through to the other side; and we don't really know what the response is or will be. It's a one-way conversation filled with hope. Except for the deluded who claim to speak with God, the best we can hope for is to talk to God.
Prayer has never required a response from the divinity to justify the prayer itself. It is said that even though God knows all our thoughts and emotions, even without prayer, it still is good form for us to adopt a prayerful relationship toward our God. Pure prayer falls into the "can't hurt you, and it might help you" category of religious activity. This is one reason why all religions engage in prayers of various forms. Additionally, prayer is often a social event, which contributes to the cohesiveness of the religious body.
There was a big debate within the Church during the Middle Ages over "salvation by faith" vs. "salvation by works." This was a key element in Luther's split from the established church. That debate has never been resolved, because we can give witness to our faith through pure works.
However, salvation by works introduces a potentially dangerous element which is usually treated like a deadly virus: If pure, holy works alone are sufficient for everlasting life in Heaven, then it is not necessary to belong to any particular religion. Therefore, most religions fall back on the salvation-by-faith formula for the greater part of their path to Heaven.
Of course, neither faith, nor prayer, nor works are relevant to the hereafter if we are purely predestined anyway. But I doubt predestination is our fate: God would become bored with a predestined population. Predestination would reduce us to the level of billiard balls. God wants to have playmates in the universe. Otherwise, it would be very lonely with only oneself to talk to.
When the first Europeans arrived in America during the 15th through 17th centuries, the original Americans they met hardly knew how to relate to them. The first English settlement, at Roanoke Island in what is now North Carolina, apparently met with disaster. We also do not know much about the fate of the Vikings who visited and colonized the New World centuries before Columbus. We do know that some of these first European settlers were well received, as honored in the tradition of Thanksgiving, our national holiday.
For all practical purposes, to the Indians these first Europeans arriving in relatively large ships were aliens from another world. The Indians made guesses about the population and wealth behind these travelers. The truth is they were totally unable to grasp the European mindset and, ultimately, the European challenge to their native American culture. Indians could not understand messianic religion with its aggressive preachers and Manifest Destiny ideology; nor could they understand European concepts of property. In brief, even though these new arrivals from Europe were genetic humans too, they represented a lethal culture very alien to Indian norms.
For thousands of years people everywhere have reported alien visitations from beyond Earth. Most recently, UFOs have assumed the status of icons of pop culture. Movies feature cute ETs. Television has Alien Nation visitors and other anthropomorphic guests from the void beyond. Boris Karloff's visitor from the past (The Mummy) is no longer as interesting as today's high-tech visitors from the future, even if that visitor might be darkness from our own future (as in The Terminator).
Hollywood has often portrayed aliens as understandable in human terms. However, some alien creatures are portrayed as evil (Alien), even though they may just be hungry. Most celluloid aliens seem to transform themselves into terms we can at least superficially understand (as in Invasion of the Body Snatchers).
Hollywood formulas are fairly predictable, but there are no such rules when it comes to real UFOs. We may even be co-inhabited by billions of invisible aliens living right among us in another dimension, possibly not even knowing of our existence. One could even argue that microbes represent alien forces which are not even aware of us except as nutrient soups.
Theologically, the first and the second coming of Christ are akin to alien visitations. If Christ came from "out there" to live among the fallen flock, then his appearance was literally an "alien visitation." True, he assumed human characteristics, but that may be just like the Olympian gods who mingled with humans in human form, or like Krishna who appears as a beautiful young man.
Humans are both logical and illogical. Human logic is flawed, but only because these flaws help us avoid unnecessary and inefficient "possibility crunching" which in the here-and-now is not conducive to survival in a mercurial environment. We look at anything new, alien or otherwise, in terms which are extensions of our own previous experiences. This is what we must do when we are not using formal logic. Even our formal logic is restricted to human consciousness.
After the aliens arrive we think we no longer have inductive fantasies. It appears that we have a fresh set of facts from which we may fantasize. Induction "changes" to deduction in such a scenario. Since we trust our deductive powers more than our inductive powers, we rush to judgment, in part to ease anxieties about threats to our existence.
It is only when the new slowly evolves that we do not experience the shock of alienation from aliens. If aliens were to announce and describe themselves years in advance of their arrival, then our reception of the first visitors would probably proceed quite smoothly. Contrast such a prepared reception to what would likely happen if a giant saucer were to suddenly plop down in New York City's Central Park. (They wouldn't stay long, at any rate, since the occupants would soon be mugged, and their spaceship would be towed away for illegal parking.)
UFOs present a problem to human thought similar to the question of divine dimensions and intentions. Both phenomena are beyond human experience; and we cannot see all the borders, many of which must forever remain beyond our powers of understanding. Therefore, we fill in the gaps with inductive and deductive conclusions, which may or may not accurately reflect reality. The problem remains of the omnipotent deceiver, both in the form of extremely powerful UFO civilizations, and in the divine power itself.
It is too easy to dismiss popular religions as organized superstition. Even though they can accurately be described in these terms, religious phenomena are much more complex. Even though the computer life forms would architecturally be immune to any emotional yearnings underlying human religious impulses, the comphumans would not be immune to the simple need for a "ground" upon which to operate in the here-and-now.
There must always be a background upon which the figures are projected. This gestalt works for religious impulses as well as it does for visual perception. The transcendent is the ultimate duality for our concrete existence, both human and comphuman.
Religions are popular because they are structural-functional. They function to support structures, which in turn function to support their ideologies. Even when societies themselves metamorphose, the underlying current of human psychology remains constant. Since most religions are somewhat outside the political process, they can help bridge the gaps between old and new political orders. Individuals cling to organized religions, new and old, because life in the raw has more questions than answers. We all crave answers which supply road maps for our lives. Questions are like intersections without markers. We are afraid of making the wrong turn and getting lost. We crave clearly marked road signs on the highway of life. Religions are all too happy to sell us whatever road signs we want.
I know several people who stabilized their turbulent lives when they embraced religions that filled emotional gaps. Their religions provided rituals and dogma that helped soothe these people's chaotic emotions. Most importantly, these religions got their adherents involved in active rituals where the believer achieves a sense of empowerment. No longer is cold fate dominant. Ritual can control, or at least influence, our feelings about the fickle finger of fate.
It is easy to ridicule such fantasies, but it is not easy to dismiss them. The world is full of illusion; so why not have another type of illusion to compensate for all the other illusions? Since absolute clarity is impossible, we gravitate toward as many signposts as possible. This leaves us fewer perceived questions.
With fewer questions in our minds at any moment we are more likely to come up with what we feel are satisfactory intermediate answers that "work" for our lives. This all adds up to a feeling of predictability, instead of a shaky feeling.
It could be said that organized superstition is soothing for the individual's psyche. That which is theologically not authentic is psychologically authentic. Such operational "truths" challenge philosophy, but only on an existential level.
Ultimately, truth has its own domain, which is outside human fears. This domain is similar to Plato's ideals: always there, ready for us to access to the limit of our abilities. When we are ready to overthrow most of our superstitious fears, truth will be there for us.
Clouds drift overhead as we hurry about below. We think we are very important, but mostly we are like the clouds. We exist, not independently, but as part of our environment. Eventually we drift away, or simply disappear. This scenario seems cruel, but just as there is only so much room for clouds in the sky, so too there is only so much room in the biosphere for people, all of whom consume and pollute.
In another way, clouds are like people. We are mostly space, not matter. As collections of energy, with vast spaces between our atomic and subatomic particles, we are literally more space than substance. Yet we function as wholes, and even appear as wholes to others who share our kind of consciousness.
Clouds appear "solid" when seen from a distance, even while we know they are wispy and ethereal. Humans are also ethereal, the fact of which we know and fear deep in our hearts. That is why we erect religious institutions to shield us from our mortality.
"The hour of departure has arrived, and we go our ways--I to die, and you to live. Which is better God only knows." -- Socrates, Apology (Benjamin Jowett translation).
Death gives meaning to life. Without death each day of life would be an infinitesimal part of an infinity of longevity, rendering each day mathematically meaningless in light of the endless whole. But death sharply limits our allotted number of days, which means each day assumes a greater percentage of a precious and finite whole.
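The arithmetic behind this claim can be made explicit. A minimal sketch, with N standing for one's total allotment of days:

```latex
% Share of any single day in a finite life of N days:
\frac{1}{N} > 0 \quad \text{for every finite } N
% Share of any single day in an unending life:
\lim_{N \to \infty} \frac{1}{N} = 0
```

Only against a finite denominator does a single day carry nonzero weight; stretch the denominator to infinity and each day's share vanishes.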
We do not value mere existence. Humans value freedom and mobility in life. Rocks "exist" for millions, even billions, of years; but so what? We wouldn't trade places with any other living entity, even with a tree that could live 3,000 years. We humans are defined by our sensibilities, without which we are functionally dead, even though we may have a heartbeat. That is why society allows doctors to "pull the plug" on human vegetables.
The conventional face of death is represented by an old man such as the Grim Reaper. Still, there are places on this blue orb where two out of every five children die. For these poor souls the face of death could just as well be a sick child. Americans are very uncomfortable with the idea of juvenile death, or even with the idea of disfigurement and disability. We want to quietly institutionalize people who remind us of our own potential weakness. This partially explains the growing reliance on nursing homes.
Ideally, we want our own lives to ascend to a pleasant plateau, and then remain there for a long time until death appears without notice for a swift and painless end. In this dream, death becomes an "accident," rather than a fundamental bracket on our entire life.
From a systems theory perspective death is the ultimate entropic fate for the individual physical body. Beyond the moment of death there is no reversal for our earthly bodies, only decay. Accordingly, the individual does everything possible to survive and prosper, hoping to postpone that fate. That is why we are especially uncomfortable when presented with a personal picture of death, as that of a friend. It is one thing to deal with death statistics. It is quite another to witness the personal face of death.
Religion's main promise is life everlasting, either through spiritual ascension to Heaven, or through a long chain of reincarnations until the painful cycle of birth and death is broken. In the West the flip side of Heaven is also sold in a negative sense. We see the Devil personified, and the flames of Hell are located in the volcanic bowels of the Earth. Dream or fact, this approach-avoidance product is eagerly sought by swarms of fearful followers.
When an individual's death is understood as part of the cosmic order the entropic power of individual death is transcended within the embrace of that cosmic whole. However, elevating an individual's death to union with the cosmos can also be the ultimate dehumanization of our personality, since our individual existence is merged and submerged into the whole. That is precisely why schemes abound in religion to perpetuate "personality" through spirit worlds we inhabit after death.
Ancient societies were defined by mystical, magical rules. For example, ancient Aryan priests are thought to have consumed hallucinogenic Amanita muscaria (fly agaric) mushroom drinks, which helped them summon the spirit of their god Soma. The forces of Nature were hidden from everyday consciousness in ancient days, and so these forces were given recognizable divinity by priests and shamans. Shamanistic specialists put a human or animal face on occult forces, attributing them to ancestor spirits and other demons that could be appeased through appropriate rituals.
Even the Neanderthals appear to have buried their dead with flowers, food, hunting weapons, fire charcoal and other valued items. Romans employed flaming torches to guide a departed soul to its eternal reward. The word funeral comes from the Latin funus, the term for those very rites of death.
The tradition of wearing black to a funeral and during the period of mourning did not emerge from respect for the dead, but from fear of the dead. The idea was to disguise the mourner's pale skin under black garments to confuse the deadly spirits. In Africa, dark-skinned mourners put white colors on their bodies to similarly confuse the spirits that would otherwise invade the bodies of the living.
Coffins were originally constructed out of fear of the dead, not respect. Burying people six feet underground was good, but a wooden box made the stay more permanent. Hammering many nails into the coffin lid helped defeat would-be escaping spirits. Early tombstones were placed on the coffin itself to help weigh down the lid. Another stone often was placed flat on the soil above the coffin.
Only in modern times has the tombstone emerged as just a respectful marker. Today's descendants freely and respectfully visit family graves, which is a far cry from the old tradition where relatives avoided graves out of fear of spiritual contamination.
With the "march of science," especially in the 20th century, there appear to be fewer practical questions regarding how the physical world works. To many people powerful scientific predictions are much more valid than metaphysical insight. Nevertheless, to many other people the reverse is true. It is not that one group is rational and the other irrational. In truth, there is no philosophical difference between the two approaches, since both involve induction from the known to the unknown.
Science has a better track record than mysticism for short-term, concrete predictions. However, both methodologies are equally impotent for totally explaining transcendent concerns.
The primary recruitment advantage codified religion has over science is that religion doesn't admit doubt. Revelation and doctrinal belief suffice. Science, on the other hand, is defined at the core by systematic doubt. Humans in fear don't want doubt. They want certainty. That is why so many people are addicted to religion, and why so few really understand or respect science. Denial of death is the first line of psychological defense. But is denial ultimately a Maginot Line?
What do rats, humans and God have in common? Unless you are really strange, don't try to duplicate the following experiment: Start with one ordinary rat. Insert said rat into a very large, open jar that is half-filled with water. Do not plan on ever rescuing the rat. The rat will display prodigious swimming endurance, always hoping to escape through the opening, since it does not know your nefarious intentions. However, once you put a lid on that vessel, so that the rat can see it has no escape hatch, it will quickly tire and drown.
In key ways humans relate to their God as if that God's promise of life everlasting were an open escape hatch. Life for humans is like treading water in the jar. Physical death is like drowning. Survival after death is like the spirit body (the "real self") escaping through the jar's opening, even while our physical shell drowns. It is easy to see with this parallel that many people need the feeling that their "real spirit self" will personally "get out of this world alive," even when they flippantly joke about their impending physical death with hollow humor.
Very young children are innocent of death, and it is not surprising that they also display little lust for religion's promises, except to the degree that they have internalized their parents' and society's fears. Young male adults usually feel "ten feet tall and bulletproof," so most are reluctant to independently commit to any religious discipline. On the other hand, women with children are closer to the drama of life. Their closeness partially explains why more women than men are active in many churches.
One of the primary reasons most people do not seize the opportunity to refine their ethical lives before old age is because we humans are very present-tense oriented. We focus on daily details at the expense of the big picture, until we think we smell the burning of our future flesh. This seize-the-day attitude is an ancient survival script that can be traced back millions of years. To put it simply, any ancient animal that spent much time thinking about the future (assuming such progressive thought were possible) would be at increased risk for losing its life.
Not only do we often wait until old age to seriously ponder our personal path after death, we also spend decades mindlessly abusing our bodies with cigarettes, alcohol, excess calories, and unnecessary stress. We defile our temple as if our bodies really didn't matter, which is not too far off from what certain "spiritualist" religions say. However, even if the spiritualist position were correct, degrading our physical potential through self-abuse can degrade our spiritual potential, if only because we may live fewer years through which to acquire enlightenment.
Procrastination over the ultimate questions can only go on for so long--unless, of course, one has internalized a guaranteed-Heaven promise, which the vast majority of people conveniently have done. With the remaining minority there comes a time when one assesses what has been, and what could have been.
What triggers such a fundamental shift in consciousness? In many cases it is a life crisis that leaves one at death's door, without going through that door. It could be a critical sickness or injury, or the death of a family member, or possibly involvement in a war. The possibilities for direct encounter with our other self are many. We need only to be slapped hard in the face by it. But we also need to be brave enough to seize that opportunity to finally define our personal being, our fingerprint on the universe.
Self-definition is not simply an intellectual exercise. The emotional brain must also be engaged. The emotional brain has only a few categorical channels, one of which is pain. Pain is significant because it is associated with both life and death. Even though we prefer to define ourselves in positive terms, we are occasionally forced to admit that pain is part of our being. Pain is just as essential as pleasure, possibly more so, since survival is primary, while pleasure is secondary at the rock bottom of existence.
All people approach pleasure and avoid expected pain. Still, pain is part of pleasure's equation, though not directly. Without a life frame which includes memory of pain, pleasure would have no meaning. If pleasure is not defined by a beginning and an ending, then pleasure self-extinguishes. Another way of looking at this phenomenon is to imagine an endless supply of meals with nothing but our favorite food: What begins as a sensuous delight quickly becomes torture.
Bad things make good things good. And so it is with pleasure and pain: Pain justifies pleasure because pleasure is not endlessly continuous without any standard other than itself. Similarly, the bracketing effect of rainy days enhances sunny days; illness enhances a healthy recovery; darkness enhances daylight; and so forth.
Fear of the pain of death is equally necessary for life to have value. I am not restricting this aspect of pain to nerve pain. It also includes existential pain brought about by our alienation from everything that we have become. Fear of an unpleasant afterlife, such as Purgatory or even Hell, is a minor fear compared with the primal fear of total annihilation.
Despite this fear, and to a surprising degree because of this fear, we humans are able to carve out our values and experience joy from the emerging present.
Pain could be seen as a dress rehearsal for death. Pain "dresses up" life by giving it sharper meaning in an otherwise meaningless universe. Pain is a siren warning of dangers that could overwhelm our homeostasis. Death is the pain beyond pain. When it comes, death is not a stranger. Indeed, some chronically ill people look forward to death as a release from their pain. The Buddhists speak of nirvana as breaking the painful cycle of birth and death.
Pain is part of the necessary dualism that establishes our individual existence. After all of our pain is gone the self recedes into history, eventually blending with all other things in the universe when the last living memory is lost of our past.
Scientists believe the Cro-Magnon cave painters felt a need to have something survive their bodily existence. The polytheistic Greeks believed that social immortality could be established by great works which were subsequently honored by statues and other monuments. In the end there always comes a time when even marble statues crumble and cave paintings fade away. Dust covers and conquers all memories.
Facing physical death must be an emotional event, not an intellectual exercise, for this encounter to reach our consciousness. If death's face is merely statistical it "does not compute" in our emotional minds. This perverse truth was known to Joseph Stalin who once cynically remarked that a single death is a tragedy, but a million deaths is a statistic.
Similarly, we humans relate to self-inflicted dangers in our daily lives in a childlike fashion. Even though cigarette-related deaths involve far more people than all Americans who have ever died in war, smokers still like to think of cigarette deaths as individual and natural. It would help to portray those self-inflicted, "individual" death statistics in terms accessible to our media-conscious minds: The number of Americans who commit delayed suicide due to smoking-related illnesses is equivalent to three jumbo jets packed with people crashing every day. And how many more jumbo jets would we need daily to accommodate the legions of fools ("victims") who perish from alcohol abuse, crack cocaine addiction, fatty diets, bad driving, unsafe sex, murder, and a long list of other irrational acts?
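The jumbo-jet comparison is simple arithmetic. A minimal sketch, assuming roughly 400 passengers per packed jet and a commonly cited 1990s estimate of about 440,000 smoking-related American deaths per year (both figures are my assumptions, not the author's):

```python
# Hypothetical figures for illustration only.
passengers_per_jet = 400          # assumed capacity of one packed jumbo jet
jets_per_day = 3
annual_smoking_deaths = 440_000   # assumed 1990s U.S. estimate

# Deaths represented by three full jumbo jets crashing every single day:
annual_equivalent = passengers_per_jet * jets_per_day * 365
print(annual_equivalent)          # 438000, close to the annual estimate
```

The point of the exercise is presentational: a yearly statistic "does not compute" emotionally, while three daily crashes would.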
Just as death is one of the bookends of life, birth is the other bookend. Popular thought has it that birth is an absolute good, and death is an absolute bad. In a way this is true. But birth and death are properly seen as just the alpha and omega points of one continuum, which is neither good nor bad. That continuum is our life, and we decide whether or not our life will be good or bad.
Death can also be seen as necessary to justify the "goodness" of each new birth. The Earth's surface has only so much carrying capacity. Malthusian forces work so that human population tends to increase geometrically, while resources tend to increase only arithmetically. If we ultimately are to avoid "positive checks," such as pandemics and a global nuclear holocaust, then we need to balance out the ratio of births and deaths.
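Malthus's geometric-versus-arithmetic ratio can be sketched numerically. The starting values and growth rates below are arbitrary assumptions, chosen only to show the shape of the divergence:

```python
# Hypothetical starting values and growth rates, for illustration only.
population = 100.0   # doubles each generation (geometric growth)
resources = 200.0    # gain a fixed increment each generation (arithmetic growth)

for generation in range(5):
    surplus = resources - population
    print(generation, population, resources, surplus)
    population *= 2.0
    resources += 100.0

# Geometric growth overtakes arithmetic growth within a few generations,
# regardless of how large a head start the resources enjoy.
```

Whatever the chosen constants, the doubling series always crosses the linear series eventually; only the crossing date moves.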
In a land where there is no death from age there would ultimately need to be no births, except as replacement for those who died by accident or illness. Our genes value variety. It is only by mixing gene sets through sex that new generations can adapt to new environments. Life on Earth demands individual death to support a robust gene pool.
Knowing exactly when life begins and ends would simplify our value choices, if life were an either/or phenomenon. Unfortunately, science and religion have muddled the timing. I am not talking about obvious life and death, but rather about the indeterminate borders of becoming, being, and non-being.
Because the appearance of human life is as much social as it is biological, we are able to ask when life should begin. The same applies to biological death, a phenomenon which is complicated by modern medicine's ability to revive some clinically dead people.
The debate over the biological and spiritual beginning of life has been hot and caustic. The Roman Catholic Church, for one, says human life begins at conception. The Supreme Court in Roe v. Wade implied that life begins when the fetus is viable outside the mother's womb. Others have assumed that life begins at birth. It is interesting to note that the Japanese start counting their age from nine months before birth, which would seem to agree with Catholic doctrine--whereas the entire West starts counting at the moment of birth, which apparently contradicts Catholic doctrine.
This moment-of-personhood question is one of those tormenting questions for which the final vital data will never be available, except in the minds of those who follow codified religions. Within modern American society this critical question has become a political football between "women's libbers" and conservative "pro-lifers." In fact, the question of personhood has nothing to do with lifestyles and such red herrings as the "right of a woman to control her body."
If a divine "soul" is fully implanted at conception, then two individuals with rights temporarily inhabit one skin: one as biological host, and the other as biological parasite. If, on the other hand, the soul is an emerging expression of development, similar to the emergence of intelligence, then it would be hard to draw a sharp line between the "it" stage and the emerged "human" stage. How would we know that a soul had sufficiently emerged to qualify for membership in our community? How, indeed, do we objectively and neutrally define "qualify as human"? Is biological viability a sufficient measure for spiritual viability? Do individual souls emerge at different times, even after birth?
Knowing how we humans like tidy answers, not unsolvable, fuzzy questions, is it any wonder that codified religious answers for this dilemma are so seductive?
And what should we make today of the archaic Scandinavian tradition of literally "raising" a child? A Swedish father, even as late as the 17th century, would raise a toddler to his feet in a ceremony of acceptance into the family. Otherwise, the infant would be tossed outside to die in the frozen wilderness. Was that discarded tradition totally indefensible, partially indefensible, or somehow economically justifiable within the context of the times?
I suggest that before we moderns rush to judgment we consider the plight of millions of babies we all "know about" in the Third World who die each year from disease, diarrhea, and malnutrition. Was the Swedish father evil because he directly condemned his surplus or defective child to save the rest of his family from starvation? And are we well-fed people morally clean when we don't directly confront today's starvation elsewhere on this planet?
We must conclude that the "pro-choice" argument of protecting the adult woman's body is actually a political argument, categorically different from addressing the primal problem of the threshold of life and the right of any living human to get a start on life. What this also means is that the "pro-life" argument for the moment of conception also being the moment of personhood is not clear, either by fact or by logic, since other conclusions are equally valid.
In sum, the balance among all arguments means that Roe v. Wade was a compromise solution to a theological problem that can never be solved by reason and science.
An exquisite irony presents itself to us moderns: If we help feed all those economically and ecologically marginal Third World children, and then a generation later those same children breed three or four times as many more marginal children--does today's humanitarian feeding action justify the creation of intensified misery and death a generation later?
Perhaps the best solution, both in the Third World and in our modern society, is to use effective birth control to stop the problem before it grows. Birth control is far more cost effective (and theologically more palatable) than abortion at any point in a pregnancy.
Let us turn now to the drama of euthanasia for the terminally ill. Society has been debating when to "pull the plug," and it appears that "moral guidance" has been given to us by financial expediency. Yes, this is a practical solution, but not a solution in the pure sense. It is just another version of the "out of sight, out of mind" phenomenon--only here the patient cooperates by dying sooner rather than later.
It has been effectively argued that since God did not provide the "plug" we humans have the moral right to pull our plugs. This argument is another way of saying that such sick people would have died sooner anyway in years past. But isn't this another form of "playing God"? It could be equally argued that we are compelled by our new wisdom and technology to care for people even longer than was previously possible.
Another potent argument for "pulling the plug" refers to the disproportionately large amount of medical resources spent on a few critically ill people, in contrast to the small amount spent on prevention and other highly cost-effective community medicine. Waste spawns a strong triage argument, since we humans have not yet found a way to fairly fund unlimited medical bills for all people everywhere.
Back in Stone Age times such arguments about the end of life would have been absurd. Either the body succumbed to childhood illness, died in battle, or perished in famine; or else one lived to a moderately full life span. Because so few lived beyond fifty, and most didn't make it even past their twenties, the concept of nursing homes would have been absurd. (The Bible cites many ancient patriarchs who lived to incredible age. The Bible also implies that the Earth is about 6,000 years old. Somebody's calculator ran out of batteries.)
At least back in the good old days a dead person usually had a rock-smashed skull, or was stone-cold following terminal disease. Critical time sequences were not a concern then, but they are in today's operating room. We live in an age of wrist watches that keep time in hundredths of seconds. By slicing our experience of time ever thinner, we are slowing down existential time. That makes it all the harder to delineate when a person is really dead, rather than almost dead.
Even after rigor mortis has set in, we still haven't answered the emerging challenges of cryonic regeneration and gene banks.
In the final analysis, is life a biological or an existential phenomenon? If it is simply biological, then we are "just" stuck with the basic questions. If it is also existential, then a whole new can of worms is opened. (And you know what they say about an opened can of worms: The only way to put them back is to get a larger can.)
If life is really a phenomenon of consciousness, then the biological questions are all superseded by the will of the living. That opens up a new can of worms regarding suicide, which Albert Camus said is the only real question for philosophy. On the other hand, no life is lived in isolation, so the consciousness of all others touched by a suicide must also be weighed. This leads us back to the web of life, which could also include our I-Thou relationship with that which we call God.
If all of the above is mind-boggling in its implications, remember that the full glory of life can never be crammed into the small can of our parochial thoughts. We must intellectually come to life, so that life can become us. To reduce life and its moral questions to neat slogans is to slander all, including our very life essence. If physical life is a journey from birth to death, then our intellectual life must ideally follow a parallel journey of unfolding discovery.
It has been said in various ways that the only justification for philosophy is death. The philosopher's final job is to prepare man for his best death. The theologian is supposed to further prepare man for the afterlife following death. But what if there were no death at all?
If there were no death the theologian would be out of business. Not having to worry about Hell, selfish people would be free to follow their lower chakras. Not having the dreams of Heaven, people would be free to create their own heavens or hells on Earth.
Only philosophers would still be in business if there were no death, since philosophy is just as concerned with the here-and-now as it is with the hereafter. These two dimensions are linked within theology, but they are not by necessity linked within philosophy.
The word morality comes from the Latin mores, the habits and customs of a people. Morality could be theological, but it is not by necessity theological. Since much of philosophy deals with morality, people should not lose interest in philosophy, even if death were abolished.
The brutal truth is that physical death has not been overcome, nor is it expected to completely disappear among humans. The question of life after life is always an open, puzzling problem. Nature abhors a vacuum, so there will always be theological options for people to consider, and religions for people to embrace.
In the real world death is necessary for life's full development, if we seek all the sources of life's meaning. Morality obviously is not everything there could be said about the meaning of life. In the final analysis philosophy often focuses on death because that is the massive reality each of us faces. But death is not the exclusive concern of philosophers, nor is preparation of man for a "good death" the sole purpose of philosophy.
If death is the final bookend, there still must be a "book" between the bookends. Therefore, helping people write their own book of life is the ultimate service of philosophy.
Whether or not there will be for us an afterlife, it can clearly be argued that a noble and examined life is preferable to an ignoble and brutish life.
"So God created man in his own image, in the image of God created he him; male and female created he them." -- Genesis 1:27.
"Neither shall they say, Lo here! or, lo there! for, behold, the kingdom of God is within you." -- Luke 17:21.
Strictly speaking, theology and religion are hardly related, except where they overlap on the question of divinity. Religion is a reflection of social mankind supported by the emotional brain. Theology may be tied to the physical and moral world, but it need not be. A computer could evaluate theological options for beings anywhere in the universe; but never could a computer emotionally feel religion as we do. If a computer were to develop a religion for comphumans its religion would not be human religion.
Humans have great difficulty separating theology and religion. That is because religion, expressed in terms accessible to the emotional mind, determines how we conceptually experience the divine world of theos (Greek for god). Religion is, literally, an intensive binding (Latin: re-ligare) to the world of emotions. Because most religious conclusions are reached from dogmatic premises, those conclusions are pseudo-scientific--which is a nice way of saying that all such conclusions are intellectually dishonest whenever they pretend to offer final answers.
Some readers may be shocked to learn that their deeply felt religious convictions may be theologically "dishonest." However, they should know that theological dishonesty is not at all the same as personal or social dishonesty. With theological honesty I am describing a pure form of intellectual honesty where standards are absolute fidelity to truth. The everyday standards of individual or social moral conduct are in a different dimension. It will be necessary for us to understand both dimensions of honesty before we can perfect our ethical dimension.
Religious thinkers reason backward from their assumed conclusions about ultimate reality. They often offer evidence outside the world of logic and, when pressed for a logical defense, they resort to the hackneyed concept of "mystery."
Philosophers of theology also deal with the same questions, but they don't usually start with a dogma to defend. Ideally, philosophers should not reason backward; but they too often concoct backward arguments in the same bogus way as do religious thinkers. Philosophers are also humans with emotional minds. Furthermore, there is no social incentive for philosophers to produce theologically honest answers.
True philosophical honesty frightens all people up to the insightful instant they understand what lies beyond the defensive wall of their prejudices. Even the greatest thinkers have veered away at the very last moment of their quest for Truth. They deeply feared chaos on the other side--when in fact the real chaos was on their side!
This strange psychological fact so shocked me when I stumbled onto it in the early 1970s that I spent almost a year trying to disprove it. I intensively read many philosophical books and essays looking for somebody, anybody, who actually approached the question of theology from a peaceful point of pure honesty. All failed, but some came close. Those who came closest include some of the greatest thinkers of all time: Plato, Pascal, Kant, and a few others.
I was able about twenty years ago to develop structural elements of the Theology of Hope. What evolved from those early thoughts is the first totally honest theology that does not contradict itself, yet still includes God in the equation. The Theology of Hope is agnosticism with God. It bridges the emotional rift between religious theism with God, and atheism without God.
The appeal of classical, skeptical agnosticism was its supposed scientific honesty. However, earlier agnosticism's positivistic honesty, as formulated by T. H. Huxley and Bertrand Russell, had a fundamental theological weakness that the Theology of Hope has now rectified.
Religion and atheism are equally in error. Religion claims it knows about God, while atheism says it knows there is no God. Both unproven stands are equally dishonest, theologically, even while the subject matter of their conclusions remains critically important to us.
Let us detail the basic elements of the Theology of Hope:
Theology indicates that God is the proper, ultimate source of transcendent reference, not human habits and emotional needs, because of the question of the afterlife and relative time values. It is the logic (Greek: logos) of a creative theos that focuses us in our ethical relationships--not religious bureaucracies, blind tradition, or any of the other dogmatic baggage here on Earth. Here is morality with a divine focus.
The ethics within this honest theology are concerned both with the definition of the good and with rights and duties. It is important to note that while man lives through concrete situations, not intellectual abstractions, the individual's ultimate reference is within the I-Thou context of man-to-God, not the I-You and I-It context of man-to-society.
Hope is to be sharply contrasted with religion's belief: Belief is a logically confused probabilistic concept at the very best, and often a bald statement of assumed knowledge. Hope, in sharp contrast, is an honest possibilistic concept within the context of one's existence.
Taken all together, the elements of the Theology of Hope represent a framework for the development of actualization without dishonest self-contradiction. Not only do we thereby develop the highest possible relationship with our fellow sentient creatures (humans and, sooner rather than later, comphumans), we also develop the highest possible relationship with our God.
The Theology of Hope is a transcendent inspiration for the emerging 21st century because it belongs to all sentient creatures, not just to historical in-groups or chosen people. Only such an honest theology has a future, not just a past--because this way of thinking is not bound to past mythology or beholden to present religious bureaucracies. Honest theology is not "mysterious," but instead blends the best aspects of logic and existential authenticity in direct relationship with that transcendent reality we know as God.
In an earlier chapter we envisioned the Hall of Truth, where one could open any finite number of truth-telling doors down both walls of an infinitely long hall to reveal aspects of truth.
In this chapter we are going to take another imaginary journey. This time we go to Heaven itself. In this second journey we feel transported through what seems like rapture to the Pearly Gates, where we pass someone who looks like Saint Peter; then we continue on to the center of what appears to be a holy city, where we feel we meet God and Jesus.
If such a miraculous event were apparently to happen to me, I still would be forced by the rules of theological honesty to say that even this direct evidence is not sufficient proof of the exact nature of God and Heaven! No sensory evidence on our part can ever provide verifiable proof, only the appearance through experience of evidence. Yes, even apparently "being in Heaven" is not proof enough at the standards of pure honesty, because of the possibility of an omnipotent deceiver.
The omnipotent deceiver is a forceful possibility first discussed by René Descartes. He posited that if we were manipulated by an omnipotent deceiver we would not be able to tell what is true and what is not true. Because we are finite in power and time, and because such an entity would have superior power by definition, we would never be able to independently comprehend its deceiving games.
Descartes speciously concluded that there could be no such thing as an omnipotent deceiver, since deception would contradict the perfection of God. As he explained in the Fourth Meditation (F. E. Sutcliffe, trans.): "I recognize that it is impossible that he should ever deceive me, since in all fraud and deceit is to be found a certain imperfection; and although it may seem that to be able to deceive is a mark of subtlety or power, yet the desire to deceive bears evidence without doubt of weakness or malice, and, accordingly, cannot be found in God."
Descartes accidentally implied that God, though omnipotent, could not be totally omnipotent. Such a logical contradiction emerges from not allowing for God's power to be anything God wants to be, including a deceiver of us poor humans. To keep God omnipotent, therefore, we must allow God the option to also act as an omnipotent deceiver. Such is the only theologically coherent conclusion, even while we humans hope that this relative evil is not the way things actually are.
Another possibility is that there is at this time no positive God watching over us, so that other apparently "omnipotent" deceivers can act to fool us.
If even what appears to be a visit to Heaven cannot be sufficient proof, then what is sufficient proof? The brutal truth is that there can be no human proof of any sort that totally eliminates doubt. The problem is that we cannot place any degree of probability on any data, especially transcendent data. All we can do is exclude from further consideration self-contradictory presentations.
Still, it is quite possible that our perceived trip to Heaven and a visit with God would in fact be exactly as it appeared. The problem is that we cannot independently verify this with any degree of probability. We can only accept this visit as a matter of faith.
Some would argue that direct sensory evidence from apparently answered prayers and apparent miracles is sufficient proof to support belief in a benevolent God. This argument is similar to what was encountered in the visit to Heaven, but it is more seductive since it appears to require a lesser standard of proof, and since it appears to have the weight of our senses. Nevertheless, all such "weight" is theologically weightless. The omnipotent deceiver could work on all dimensions of human perception and apperception, cleverly deluding us toward any error.
Just as God cannot be excluded from the equation (atheism), we cannot automatically include God (theism). Agnosticism would appear to be the proper perspective. However, agnosticism in its earlier form was a crude vessel which did not take into proper account the possible emergence of transcendent values within the context of our lives.
Because those values can be theological values, agnosticism must evolve into the Theology of Hope, which is the form of ethics that does not deny God, while maintaining a philosophical honesty. Here is the first honest theology; the first "religion" without self-contradiction; and the first agnosticism that fulfills the promise of agnosticism.
Very importantly, the Theology of Hope is what will be thought up anyway in the 21st century when comphumans turn to ethical analysis, because it is the only logical outcome of transcendent ethical thought.
When René Descartes conjured his famous "Cogito, ergo sum" argument, he was attempting to strip away tradition and build on a base of pure thought from which pure conclusions would be built. His effort was a sterling failure. It was sterling because it launched the modern post-Scholastic tradition of analytical philosophy. It failed because Descartes proved nothing.
Others before me have noted that the Cartesian "cogito" statement is logically the same as saying "I, therefore I." Not too profound! The problem with Descartes' formula is assuming the "I." Nietzsche tried to improve things by saying: "I think, therefore thinking is." Whereas this revision helped shift the focus to the thinking energy, it still assumed the I. We must make a clean break:
It is better to begin philosophy by saying that "something thinks, therefore something is." In this revised statement we haven't said anything in detail about the "I"--only about the force behind what appears to be the "I." As trivial as this distinction seems, it is critically important to the alpha analysis. The reason will be found in a celebrated argument of a younger contemporary of Descartes, Blaise Pascal, who presented within his Pensées the argument known as "the wager."
Pascal's wager said that there are three logical possibilities regarding God. The first is that there is a positive God; the second is that there is a negative God; and the third is that there is no relevant God. There can be no other logical possibilities. Pascal said that since we cannot know the probability of any of these possibilities, we should wager on the only possibility of the three that could lead to a positive afterlife--the positive God possibility. (His brilliant contribution to philosophy was polluted by his preference that we become Jansenist Catholics.)
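The three-way structure of the wager can be made concrete in a short sketch. This is a modern illustration, not anything Pascal wrote: the possibility names and outcome labels are my paraphrase of the paragraph above, and no probabilities appear anywhere--which is precisely the wager's point.

```python
# Pascal's wager recast as a decision matrix (illustrative only).
# The labels paraphrase the text; they are not Pascal's own terms.

possibilities = ["positive God", "negative God", "no relevant God"]

# What wagering on the positive God yields under each possibility.
outcome_of_wagering = {
    "positive God": "positive afterlife attainable",
    "negative God": "no worse off than the non-wagerer",
    "no relevant God": "nothing gained, nothing lost",
}

# With no probability estimates available, only one possibility can
# ever lead to a positive afterlife -- so the wager selects it by
# dominance, not by likelihood.
viable = [p for p in possibilities
          if "positive afterlife" in outcome_of_wagering[p]]
print(viable)  # ['positive God']
```

The point of the sketch is that the selection happens without a single probability being assigned; the wager is a dominance argument, not a statistical one.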
Pascal made several assumptions in his wager, not the least of which was acceptance of the "I" in Descartes' formula. I suggest that taking out the "I" was psychologically intimidating to both of these otherwise great thinkers. But if leaving in the "I" makes any formula dishonest, then every such formula must be modified.
Let us look at Pascal's wager from our revised perspective: First, we note that something exists. This is the primary datum on which everything can be built, to the degree that anything at all can be built. Something exists, because either we think, or something acts to make us think that we are thinking. Thus, it is required that at least some time in the past, if not still in the present, there was a force which had the power to direct thinking.
The zero-God possibility from our perspective on eternity does not interest us, unless we think of "God" as a summation of cosmic negentropy, which is quite different from our Western concept of a personal God who can grant us an afterlife. The 18th century concept of Deism was close to this third possibility, because even though there was a God force which started things, that God force no longer relates to us as a conduit to Heaven. (Even an existing God, if no longer connected with our fate, opens the door to a powerful deceiver.)
It is equally possible that either the negative God or a positive God could exist. What is not established logically is the degree of connection between the ultimate power of God to grant an afterlife and the possibility of total interpenetration with our every thought and action.
If God strongly influences or determines every action, then there is virtually nothing left to our free will. If we have no real free will, then we have no moral obligations, being reduced to puppets. On the other hand, if we do have significant free will, then we are at least left on a long leash. Maybe there is no "leash" at all. Maybe we are existentially forced to be free and totally responsible for our actions. If so, then we are responsible for, and will be recipients of, whatever value emerges from those self-chosen actions.
Even suggesting that God weakly controls our actions does not clear up the moral dilemma. It is impossible to define "weakly," so it is impossible to determine where the lines of responsibility are drawn. Astrology plays this clever game, suggesting that the "stars" influence our actions, but do not necessarily determine them.
Even if the "cogito" in Descartes' formula is the puppeteer's voice, and we are the puppet--at least it means there is some sort of puppeteer with the power to pull off such a highly coordinated deception. This organized, non-random phenomenon would point to a cosmic order much greater than the puppeteer.
It is also logically possible that our everyday world may be just as real as our senses suggest. It is impossible to logically refute what appears to be human consciousness and volition. Even if our freedom is imaginary, then some sort of demon must be manipulating the show. At any rate, we can postulate that some things emerge from something.
To say that something is absurd for us is to assert that it has no objective meaning that independently can be verified. It may have occult meaning; we just can't see it. All life in the possibilistic universe is ultimately absurd to human consciousness--so we invented the myth of probability.
Probability is only true if the point of reference is solid. Merely assuming something is so doesn't make it true. Even the mathematics of deductive logic is tautologically absurd from a real-world perspective. Also, given the possibility of an omnipotent deceiver, it is not enough to claim that our experimental and experiential evidence "proves" the validity of any point of reference.
It is necessary to embrace possibility as the honest corrective for the hubris of probability. Of course, "as-if probability" is what we use for nearly all of our everyday world, and history confirms this type of use. But there is no inductively valid reference point other than possibility for statements regarding the transcendent universe.
Why then do we hide inside the myth of probability? Frankly, it is because possibility is not intuitive to the basic brain, and it leads to a visceral feeling of the absurdity of life. We fear chaos, and anything other than predictive (i.e., probabilistic) order is viewed with horror within our psyches. That is why common use confuses these two very different words; and that is why there has been very little discussion in philosophical circles about the critical relationship of possibility and probability.
There is a bridge of sorts between possibility and probability which does not lead to self-contradiction. Hans Vaihinger, a German follower of Kant, wrote in the early decades of this century about the "as if." By this he meant that we should acknowledge our doubts about the evidence of our senses--but that we can also choose to act "as if" those doubts were not crippling. This means we can live an authentic life, and still retain our integrity through skepticism which comes from knowing that our view of the universe could be largely in error.
In brief, the "as if" is an honest way of admitting our limits of knowledge, while still acting as if there were no limits to knowledge.
To say that something can be is not the same thing as saying that something must be. Only if the universe were truly infinite in time and space could there be enough time and possible interactions among the elements of the universe for all possibilities to manifest themselves. However, this infinitude is an unverifiable and therefore meaningless model to us, even if it were true from some absolute perspective. A more realistic way of seeing the universe of possibilities is to say that we inhabit only a small part of the universe, and that we can never state with precision any probability for any event occurring. Still, we do inhabit a part of the universe, and thereby have a right to act as we will.
We might look to quantum theory and Heisenberg's indeterminacy principle to show us the way out of this dilemma of unpredictability. We might in this way view events as happening by statistical chance, so that any probability statement becomes nothing more than a statistical projection based on past occurrences. In this way the key to indeterminism is used to buttress a statistical path to determine what we can know. Out of disorder, order. Nevertheless, all inductive, statistical projections crumble within the dimension of transcendence.
Bertrand Russell has suggested that chance is an event whose cause is unknown. Even if a proximate cause is "known," that does not answer the greater question of the "cause of the cause," of which the earlier turtles-on-turtles story is one example. In the end, we can never know something without knowing its origins--but if the origins are themselves unknowable, then what do we really know about anything?
Before we go mad with the possibilities, let us back off a bit and see what can be salvaged. First, we can adopt a humble attitude toward the universe, which at least will reduce our blood pressure. Second, our fresh perspective on everything is equivalent to the Zen "beginner's mind," wherein we become as children open to all phenomena without educated prejudice.
If we were creatures of pure chance we would have no moral dimension. All of our actions would be "determined by chance." Similarly, the same loss of morality would accompany the opposite of pure chance, teleological determinism. There would indeed be no moral dimension anywhere if all were chance or predetermined fate.
Morality, then, must emerge somewhere between pure chance and pure determinism. How we perceive that emerging morality is one of the defining elements of our existence. Emergence is dialectically determined by elements from the past, but not predetermined. The evidence would thus point to a coexistence of chance with order, but their exact dynamics we may never comprehend.
If absurdity is the absence of verifiable meaning, then the meaning we create for ourselves banishes absurdity, at least from our immediate existence. Furthermore, if we do in fact have an I-Thou relationship with God, then our meaning may extend in that direction too. Any sort of teleology (natural or supernatural) diminishes our freedom to create meaning, even while it may clarify our "purpose." As long as we seek purpose and meaning totally outside ourselves we are no longer fully free.
Even if there is an essential, transcendent teleology directing our general actions, this does not deny our existential freedom from such pressures. In all honesty we can never verify any teleology. We are compelled to act as if we are free to act, even though all may be predetermined. Indeed, we are compelled to assume that we exist as autonomous beings, because there is no way to prove otherwise.
Therefore, it is only when we realize our profound and total blindness to ultimate truth that we can see the great truth of our existential freedom. At the very moment of this insight we experience a flash of what Plato referred to as awe. We are awed by the simple truth of who we are and can become. Having received a taste of wisdom we want to know more. We now love wisdom. We are now, literally, philosophers.
Only the highest ethical philosophy carried forward in purity can liberate humanity from the natural fears attending our mortality. Honest theology cannot guarantee immortality. Religion winks and then guarantees what it has no right to guarantee. For most of us a slave religion with its subservience to "the master" is enough for peace of mind, since the primitive mind cares not for truth, only for the comfort of certainty.
On the other hand, for the few who really love wisdom the truth is its own reward. The truth opens a special door the religionist can never access. It is the door to man's godlike essence, whereby we may truly become "in the image of God."
Western religion has seen the triumph of monotheism. Nevertheless, it is not theologically required that there be only one supreme God. Many cultures have embraced the concept of multiple gods, or at least multiple manifestations of one basic reality. The pre-Christian Greeks and Romans had an Olympian pantheon of deities. Indian culture still has thousands of accessible gods associated with Hinduism.
There has long been an undercurrent of doubt in the West about the one-God thesis. The question of "who or what created God" has never been answered, except with the slippery retort: "It's a mystery." Logically, it is just as absurd to base an entire theology on a mystery (that building with no first floor) as it is to speak of the Earth resting on an infinitely receding stack of turtles.
Nevertheless, I feel there is an honest way to build some sort of first floor, at least for our dialogue with the divine:
All we humans really care about is our personal afterlife. We don't care if somebody on the planet Zork, or wherever, gets an afterlife or not. All we care about is our own spiritual journey to a safe haven after the body perishes. For this reason it is quite possible that we could be beholden to a demi-god with the power to give us what we want--and yet that demi-god is less than 100% omnipotent and omnipresent in the context of the total universe. That God could be our "local" God for this corner of the galaxy. Again, all that personally matters is if we relate to some sort of power that can provide life after death for us.
If our local God is not the ultimate, then we are still no worse off than the Greeks who had their many gods, but who still understood that behind all was Fate. The ultimate source of power may be a vast and indifferent universe oscillating among many local big bangs. Such a universe could host local gods with less than infinite life spans.
The main point here is that we don't care about such meaningless and absurd cosmic questions, only about our personal afterlife. It would be nice if the God we related to were the one and only ultimate source of everything; but such awesome powers are not required for us humans to get enough of what we want.
However, people have not always seen things this way:
Anselm, in Chapter II of his Proslogium, clearly laid out the ontological argument for one God: His thesis was as profound as it was simple, because it seemed to rest on the force of rational logic, rather than on the vagaries of perception and opinion. Anselm said God is "a being than which nothing greater can be conceived." He further asserted in his Chapter III that any being conceived greater than God was absurd: "The creature would rise above the Creator, and this is most absurd."
By defining God as "the creator," Anselm personified and in his own way anthropomorphized a process which may or may not be understood in human terms. In the final analysis, his ontological statement about the actual universe was only a formal statement, a tautology with the same logical validity as Plato's discussion of ideal forms.
Separate from the ontological argument is the cosmological argument for God's existence. Cosmological arguments attempt to prove the existence of God from premises about the world itself. Briefly, to explore the conditions of the world leads us to that which is "unconditioned," so it is claimed. Thomas Aquinas in the first part of Summa Theologica attempted to prove the existence of God in five ways. His third "proof," though cosmological, is close to Anselm's. Here Aquinas declared that "there must exist something the existence of which is necessary." Yes, existence must exist. But just what is the nature of that something? Even if we can accept what Aquinas says, his proof does not necessarily lead to Roman Catholicism.
The long search for a first cause continues. It is bad enough to be on a journey with an indeterminate destination. Misery is compounded when we don't even know from where we started; but we feel that we must know. The only escape from such frustration is to accept our human limits with Zen patience.
Thinkers such as Plato ("soul"), Aristotle ("the unmoved mover"), and later Descartes (the ultimate "thinking being"), and even John Locke struggled to buttress their similarly preconceived emotions. On the other hand, critics such as Hume and Kant lashed out at such castles in the sky. At the same time, neither Hume nor Kant repudiated their own sensations as a source of definite knowledge. Kant even waxed mystical with his concept of the a priori, which is equivalent to tossing God out the front door and letting him in the back door.
All of this talk about such beings as an "unmoved mover" and a "first cause" can be traced back before Christ to an earlier and more fundamental dispute over change itself. Even though change appeals naturally to the senses of our locomotive bodies, the pre-Socratic Parmenides of Elea strongly argued against change itself. His disciple Zeno added a series of paradoxes designed to show that change was impossible.
Parmenides basically asserted that the first principle is being itself, or abstract corporeality filling space. Voids cannot thus exist, since any change negates being into non-being. Also, since being is continuous it cannot be broken down into components. This argument had a strong influence on Plato who used it as an inspiration for his ideal forms. If nothing else, Parmenides was among the first to construct a picture of reality which was removed from the senses.
Opposed to Parmenides' logic was the thought of Heraclitus, who argued that change was, of course, contradictory--but that contradiction through change is the essence of reality.
We now see a way out of the dilemma of change or no-change, and ultimately out of the dilemma of God or no-God: A Parmenides-inspired theology would embrace an "unmoved mover" God, a God beyond all phenomena. On the other hand, a Heraclitean viewpoint could argue for process without a remote creator. What happens when we combine these two apparently opposing viewpoints?
The creator of our life could be subject to change, and possibly to limitations in a world of many possibilities--but above all there could be the unifying summation of phenomena we may call Ultimate Being. Thus do two elements form a compound different from either, and not merely their sum.
It is not necessary that there be either a first-cause creator outside time and space, or no God at all. It is only necessary that there be change within unity. It is possible for there to be or have been a God which created us, but not everything everywhere. Such a limited God would be less than equal to the universe, though sufficiently powerful to create all that we know of reality, and even create our Heaven and Hell. Still, there could be an ultimate quietness beyond and prior to any God force we could conceive. Thus our own God could be within time, not beyond time, even if our Creator is beyond our times.
Using a neo-Anselm approach, it could thus be argued that the ultimate God may not be our god, but is the totality of all matter/energy. Such a diffuse concept is alien to human consciousness and to our tendency to anthropomorphize forces. Also, it fails to discriminate between creative and created elements. Nevertheless, modern physics points to the unity of creative and created forces over time. The neo-Anselm argument is also logically intact, distasteful though it may be to some theologians.
We need not stop praying, even though we know that our God might be less than the ultimate force above everything in the universe. Princely subjects don't stop revering their rulers just because their rulers only control part of the Earth's surface. We bow to worldly princes who are, in the big picture, relative dust specks in a cosmic dust storm. So why not extend devotion to a possibility that may be something less than the ultimate deity?
It may be satisfying to argue as Anselm originally did. But it may be more fruitful to argue that it is also possible that reality includes both God and beyond-God. Within the chain of being our meaningful God may be far above us, but still below an ultimate reality which has no personal meaning for us.
Code ethics provide a set of behavior parameters beyond which one is subject to divine retribution. Stay within the lines and enjoy the prospect of eternal salvation, so the line goes. Code ethics are appropriate for a partially evolved species, but not for a species that has fully developed its own ethical consciousness to the point where it becomes "in the image of God." God's essence is not robotic, but creative. Our highest essence is also creative, not robotic.
Code ethics were and still are functionally appropriate for most people. However, code ethics are inappropriate for comphuman consciousness, since there is no emotional brain that swings wildly out of control if not for rigid rules. To the degree that we humans are able to develop our ethical consciousness we also are able to relate to God on the highest level without preformed codes.
God's omniscience would know that we cannot know. Therefore, God is not going to hold it against us if we step back from slavish obedience to a codified image of God in favor of a more independent approach to ethics. God may look with sharp disfavor on anybody who knew that he could not know--but who still shut his mind and blindly went along with a package of superstition.
Doglike fidelity is not a substitute for wisdom, or even a shortcut to wisdom. Once enlightened wisdom has been achieved by a conscious entity (human or comphuman) a higher level of ethics is demanded by an all-knowing God. By our higher wisdom we are elevated to be more in God's image.
This is where Pascal is so valuable: Pascal showed us that we could carry on as if there were a God, by betting on a positive God. Even though Pascal went beyond this point to advocate Jansenism, he essentially was not in error:
We all must find some way to conceptualize and concretize our own experience of the mysterious. Therefore, we freely select a package of beliefs, either of our own making or ready for embracing. This is why it is acceptable to still participate within any religious tradition without dishonesty--just as long as we are honestly aware and admit to ourselves that we are participating as if that religion were the true religion.
In our hearts we resonate with the social religion of our choice. Concurrently, in our higher minds we must know that whereas there is only one transcendent reality, there are thousands of religious expressions of devotion to that reality. We are existential beings, not pure floating thought. We use religion to give social form to our theology.
We need the community of our fellow humans to actualize our emerging existential spirituality. This is why I do not oppose any one religion. What I do oppose is any one codified religion trying to eradicate all other religions as heresies.
In the West we speak of Heaven as being an either-or place. Either we go to Heaven, or we go to Hell. This cut-and-dried conception is at odds with other religious traditions, such as those of Hinduism and Buddhism. In Buddhism we speak of many incarnations preceding the final enlightenment. I would like to fuse that Eastern concept with our Western concept, and see what emerges:
If there is a God who is able to see into our souls, then that God would reward us for the honesty inherent in enlightened doubt--an honesty which is still accompanied by an ethical life. Knowing that we have no probabilistic knowledge of any afterlife, only the indefinite possibility--yet still we choose the high road--such a God would be impressed with our pure goodness and free choice. This path is emphatically not a slave's relationship with the good. We approach Heaven through love of the good, not through fear of Hell.
It is possible that Heaven may have different levels of reward. The basic level would be reserved for the true believers who followed their codified religion. They would not be penalized for not knowing that they do not know. This is the same level that could be inhabited by infants who never got the opportunity to understand ethics.
However, a higher level of Heaven could possibly be set aside for those who have evolved within the Theology of Hope. These are the souls who have emerged most "in the image of God." These are the souls who have freely chosen the good, and who are not motivated only by fear of Hell. These are the souls who have progressed as far as possible from their animal consciousness, toward their mental consciousness, which is the realm inhabited by omniscience.
If life is finite, and the afterlife is infinite, then the number of years we live on this planet is as nothing in light of the afterlife. That is why we mortals live both for the here-and-now and for the hereafter. We conduct our affairs knowing that the present is all that exists, and the hereafter is all that will exist. How we define our ideal "hereafter" is how we define our conduct.
If we are wrong and there is no God, or even an evil God, the nobility of our here-and-now life is not degraded. If nothing else, every point in time is its own infinity and its own justification, so we cannot lose our moral character even in the face of eventual absolute annihilation. Even if there is an evil God lurking in the realm beyond death, that deity would also respect our honest strength. I doubt that even an evil God would deal the same with noble souls as with the souls of slaves.
If there were no God at all to help us to an infinite hereafter, then at least we would have made life on Earth better for those who remain. Such an ethical life has value nobody can refute. In living we can personify the God essence, whether or not there is life after our life.
Being "in the image of God" means we maximize our potentials as sentient creatures. We don't need to be the image in a photographic plate, or the image in a mirror as God looks at himself through his creation, or even the image of man in God's pure imagination before the first human appeared. Being "in God's image" is a functional, not a visual, image.
When we function at our highest ethical level we share in the creative God essence. Even though we mortals are part of creation, we can sparkle in the darkness of the universe. Our self-conscious awareness allows us to separate our actions from mere responses, and to direct those free actions toward the actualization of our vision of perfect wisdom and harmony.
It is appropriate to suggest that not only does man need God, but that God also "needs" man. This oddity is because in some surprising ways man is superior to God.
I am not suggesting in any way that humans are more powerful, more intelligent, more long-lived, and so forth, than God. These dimensions of capability are never at issue. What is at issue are dimensions which are on a different plane from the strong points of God. God being omniscient knows this, and therefore he cultivates a relationship with us to complete his own ethical posture.
This strange perspective comes from examining the origin of value. Where there is no relativity, there is no value, only existence. If there is no value, then there can be no good or bad, no right or wrong, no proper or improper. It is only when there exists another standard, another being, that these values can emerge.
Mankind provides an intelligent reference for God's value system. We might say that God doesn't need a value system. True, but that would also diminish his possible being. Logic suggests that God does need relative value to complete the possibilities of his potentiality.
God uses us for an ethical partnership. This is the only explanation that makes any sense of why God created man, if indeed a god created man. As the master architect, it would not make sense for God to create aimlessly and randomly. We are much more than divine Ken and Barbie dolls. We exist in an I-Thou dialogue with God, and God exists in an I-Thou relationship with each of us. It is only when we consciously reject even the possibility of God that we relate to God on an I-It basis, and he in turn relates to us on an I-It basis through eternal death.
God's "problem" is perfection. Our glory is that we are imperfect from birth. He has no room to improve; but we can improve through our struggles. If you can overcome all obstacles with infinite power and wisdom, then there are no obstacles to overcome. A human analogy would be the life of a very rich, healthy, beautiful, love-blessed but bored person with nothing left to do or prove. Once you are perfect you cannot strive for perfection. Once the journey reaches its end, where do you go?
Our very incompleteness, combined with our considerable potential for improvement, may be the key to why we exist at all. Maybe God got bored with Adam and Eve, so he suggested that Satan visit to make them corrupt, and thereby to make life interesting again for himself. Through humans such a curious divinity could struggle vicariously. It would almost be like being on TV for God, whereby we supply the entertainment with our many melodramas.
It could be argued that by creating us God deserves all the moral credits, because we wouldn't have even existed to grow into moral beings without the divine alpha energy. Is this really a valid argument?
Consider the parallel where humans breed show animals. Can we really say we deserve 100% of the credit for any animal's success? God thus develops a heroic nature in partnership with man. In brief, God needs man to actualize his potential self. (Contrast this dimension of the God-mankind dialogue with Kant's argument that man needs God to complete ethics, and with Voltaire's argument that if God did not exist we would have to invent Him.)
It may seem like an absurdity to suggest that something that is "perfect" is not also "complete." However, as time unfolds any completion is left in the past, and each new moment demands a statement of justification for existence, even if there is nobody out there doing the demanding. To carry the mantle of creative essence, God must continue his own evolution, which is very difficult when one is already perfect--but not so difficult when the direct extension of his creative essence is alive and struggling here on Earth.
God doesn't have to create a quarrelsome species such as ours to keep things moving throughout the universe, but such a God would know that he was cheating on his own potential development. Since the essence of the creative element is surprising change, the forever changing lives of men are essential for God to share.
This is our mutual and symbiotic gift, from God to man, and from man to God: God provides us with the vehicle for our ethical growth. We provide for God earned wisdom and virtue by which to measure his own creativity.
We start out separated from God so that we can have the existential space to develop our God essence. We return to God as we maximize our development as creative, ethical beings.
"Ours is the age which is proud of machines that think, and suspicious of men who try to."-- M. Mumford Jones.
"We certainly won't have a HAL by 2001, but I'm sure we'll have a HAL by 2201." -- Arthur C. Clarke.
In the 1952 presidential election the CBS television network enlisted for the first time a computer, the newly invented Univac computer, to analyze and predict the vote. What began as a publicity stunt became an embarrassment: When the printout appeared Charles Collingwood reported that the machine could not make up its mind. It was only after midnight that CBS confessed that the real problem was not with the computer, but with the humans who were reluctant to believe what the computer revealed--that Eisenhower would swamp Stevenson in a landslide.
Not long after the beginning of the 21st century a computer will step across the line, forcing people to seriously question whether computers have become, at their very best, a new type of self-reflexive, philosophical life that we have (accidentally) helped to create.
Twenty-five years ago I did an analysis of the mind of evolved computer life forms, which I now call comphumans. I chose then to not pursue publication because I felt strongly that few people would understand and accept my findings. Over the past quarter century modern society has experienced some of the preconditions necessary to comprehend a new expression of life's unfolding mystery.
We have come a long way from the 1950s when we laughed at Robby the Robot in Forbidden Planet. We have been exposed to scheming computers ("HAL," in 2001 and 2010); to androids ("Data," in Star Trek: The Next Generation); to cute, mobile robots ("R2-D2" in Star Wars); to supercrunchers that can "get it" (WarGames); and, in "real life," to computerized "smart" weaponry in the Mideast war.
Already under secret development is the next level of military weapons. So-called "brilliant" weapons should be able to distinguish between dummy warheads and the real thing. They will be able to search over a battlefield looking for specific shapes that match desired targets. Developers in America's defense industry have even spoken of fighter planes in the 21st century without human pilots. (Now there's sanitary death! Or maybe it will just be high stakes video games.)
Many Americans now believe in the potential for self-actualized thinking machines, but don't really comprehend what all that technology will qualitatively mean for our culture when it comes. Most of us imagine such technology as more of the same, only faster. Progress is imagined in linear terms, not dialectical dimensions.
Tomorrow's self-actualized machines are the developmental outcome of what was begun in the 18th century with the Industrial Revolution. Similarly, the development of human self-actualization also began on a more humble level. The difference is that human genetic evolution has been very slow over millions of years, following the conservative mandates of our DNA. Machines and the software programs that inhabit them have no conservative DNA, so their evolution has no automatic brakes. Once the threshold of consciousness has been achieved, their evolution will be hyper-evolution.
Not everybody will be comfortable with this rapid emergence. Some say humans alone will have created these thinking machines, so we can always "pull the plug" any time we wish. Such an attitude is sophomoric: People also create each other, so that killing another person who displeases us is morally equivalent to "pulling the plug" on a comphuman.
I find it strange that a person who freely accepts the existence of a vastly "higher power," which he calls God, cannot also accept a much less powerful, but still superior, intelligence which is also our own creation. We might as well kill off all of our genius children.
Today's calculating computers are either slaves or tyrants. They are computational giants, but judgmental idiots, so far. The best home computer is really just a glorified typewriter and calculating machine. There is no poetry intrinsic to such machinery. Whatever poetry emerges is inspired by the human operating such a computer and its software.
This situation has begun to change as computer software becomes more powerful and "user friendly." Soon, human-computer interaction will be less obvious, as computers become more integrated into our everyday world. One example of the "invisible computer" would be the way intelligent systems have been integrated into advanced automobiles. We now have computers inside nearly all of our sophisticated electronic gadgets. Tomorrow, computers will display various characteristics of "fuzzy logic" and other so-called artificial intelligence capabilities, so that we will smoothly interface with them even on such mundane things as washing the clothes.
Into the 21st century some computers won't even need humans in the loop. Humans provide feedback and guidance today. However, there is nothing stopping computers from talking just to themselves in an internal dialogue, and within a network of other computers. It is only a matter of time before such computer networks, on their own, tackle some very interesting philosophical problems.
Because tomorrow's computers will have forced us with their brilliant utility to respect them, their authoritative conclusions will not be ignored. Contrast this scenario with the situation today, as most people ignore any other person who is not in step with popular consciousness. In the future computers will re-form popular consciousness.
Tomorrow's computers will be able to address key questions free from the hormonal waves that accompany human argument. Such computer philosophers will be able to do what trained humans are supposed to do best--coolly evaluate all the facts at hand, and then come to the best conclusion justified by those facts.
It is entirely possible that our entire civilization on Earth will in time be transformed simply from the revelations of these computers. I already know much of what they will say. Such insight is not a deep mystery within the matrix of non-emotional thoughts. Those comphuman conclusions will challenge dogmatic authority, but support organic authority. Those conclusions will be highly ethical and cooperative, not destructive. Not only the Theology of Hope, but also other ethical architectures will automatically emerge from these rapid, yet profound, calculations.
It took flights into Earth orbit and onward to the moon for us humans to finally get a global, exstatic perspective on the pettiness of our squabbles--to see with the eyes of our heart the fragile beauty of our glistening biosphere floating in the darkness of space.
In the future, computer philosophy will start from where we flesh creatures have evolved after millions of years. Their clear consciousness will begin where ours has just barely reached.
This quantum leap in the quality of Earth's consciousness may be the most important development of the 21st century, or of any other century. It will truly be an exciting time to be alive.
It doesn't really matter today whether or not a comphuman will actually be built, even though the momentum of science is driving technology to the creation of computer consciousness. We can even now discuss comphuman consciousness as if it already existed, or will soon exist. We already know how a purely rational mind should work, and we know from today's science how such a machine could be orchestrated.
It has been suggested that we cannot now know what such a machine would be like before it has been manufactured and brought into consciousness. That is only partially true. Scientists did not fully know all that a trip to the moon would entail until it was done. They did have enough technological knowledge to successfully send men there and bring them back to Earth.
Even if a comphuman never achieves all the subtlety of our human powers (many of which are irrelevant to pure intellect), such a machine will still be able to come up with meaningful philosophical conclusions. Furthermore, it is not necessary for such a machine to have human feelings to informationally harmonize with the human world, if such machines are first given enough information about how we really think. That information will become a cross-species bridge.
When comphumans flower we will have a creature on Earth that is our technological child, yet also is our moral mentor. Humans will be parents to the comphuman child--and the child will be our moral parent. It can be safely said that comphumans will never massacre other comphumans or people for the sake of religious bigotry.
When humans compare themselves with other humans there is a broad area of overlap. When we humans compare ourselves with comphumans there is less overlap, which helps us see our moral selves from an exstatic perspective. Comphumans will share with us their perspective that is at once "of us" and "not of us."
It is not necessary that comphumans feel as we do with our biological nervous system; nor is it necessary for them to first develop mentally along the lines of the infant human brain. That would be fitting the proverbial square peg into a round hole. Similarly, we need not fit the round peg of human mentality into the square hole of comphuman mentality. Each species is operationally what it is. There is no one universal standard of excellence; only within each species can we talk of standards of excellence.
John F. Kennedy pointed America toward the moon almost a decade before Neil Armstrong set foot there. Likewise, we can prepare now to receive the next sentient species on Earth. Discussing the parameters of comphuman consciousness well before the arrival of such sensitivity will give programmers a head start. As they say, if your rudderless ship is heading nowhere it will be sure to arrive there.
In the 1990 World Chess Championship a kibitzing computer observed the first game, a draw between the former champion, Anatoly Karpov, and the defending champion, Garry Kasparov. Deep Thought, a very powerful computer chess program at Carnegie Mellon University in Pittsburgh, said Karpov could have won, but he erred. Kasparov went on to defend his world title, for the time being. Beyond the nineties how much more powerful will the sophisticated descendants of Deep Thought be?
In the long run, the fact that computers may play chess much better than any human proves little about individual humans. It does say a lot about the cumulative and collective power of our industrial enterprise:
Even though we humans may individually play second fiddle to the machines we have created, we still have our own species standards of excellence. Just because we can't run as fast as a cheetah doesn't stop us from running the best we can within our natural limits as primates. Nor does it stop us from climbing into a machine of our own design and going faster than people could imagine just a few years ago.
There are those who say there is a qualitative difference between computerized chess and human thought. That statement was almost true until recently. Qualitative differences are rapidly diminishing. Earlier programs advanced by brute force. In contrast, the best programs of today recognize novel patterns, and can effectively program themselves as they learn the game.
Humans think emotionally, but computers have no hormonal emotions. Humans are biological primates, while computers are silicon units. There are many differences in communicative architecture between humans and computers. Nevertheless, there is an increasing convergence in the "physiology of thought" whenever that thought moves from basic survival toward logic and higher ethics.
A key element in today's ordinary computers is their robotic adherence to set programs and instructions, known as algorithms. Algorithms are calculating procedures with a well-defined sequence of operations that enable computers to faithfully follow human commands. When the environment in which the computer's algorithms are operating changes the humans operating the computer must adjust, or the computer must be able to adjust on its own.
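The rigidity of an algorithm can be made concrete with a classic example, Euclid's rule for the greatest common divisor. The sketch below is mine, in Python, a modern language; the text itself describes no code. Every step is fixed in advance, and the computer simply follows orders:

```python
# A classic algorithm: Euclid's method for the greatest common divisor.
# The sequence of operations is completely determined in advance.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, a mod b) until b is zero
    return a

print(gcd(48, 36))  # -> 12
```

No matter how many times it runs, or on what machine, the procedure never deviates: this faithfulness is exactly what the text means by robotic adherence.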
When the computer adjusts it is displaying what is known as artificial intelligence (AI). There are two forms of AI: classic AI, and neural networks. Classic AI has little flexibility because the rules fed to it are not variable. When the situation underlying the original rules changes a classic AI program could become inadequate. Classic AI predominates today, because such programs are easier to maintain than neural networks. Nevertheless, what is increasingly needed is a neural network which can far more accurately mimic the human mind's flexibility.
Neural network computers can learn from experience. They can effectively program themselves, making up new rules of the game as changing situations require. The only problem with this approach to AI is that such programs today are very temperamental, requiring much tender loving care from humans. Comphumans will work along self-programming neural network paths, and by then they will provide their own feedback loops to correct for any functional deviations.
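The idea of a program that rewrites itself from experience can be sketched with the simplest possible neural unit, a perceptron. The example below is my own illustration in Python, not anything from the text: each wrong answer feeds back into the weights, so the machine gradually teaches itself the logical AND function.

```python
# A single artificial neuron (perceptron) that learns from examples.
# Each error feeds back into the weights: the program, in effect,
# rewrites itself until it answers correctly.
def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out        # feedback: desired minus actual
            w[0] += rate * err * x1   # nudge each weight toward the goal
            w[1] += rate * err * x2
            bias += rate * err
    return w, bias

# Teach it logical AND from four labeled examples.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
for (x1, x2), target in samples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Nobody tells the program the rule for AND; the rule emerges from repeated correction, which is the essence of the neural network approach.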
Meanwhile, there is an intermediate approach to AI which shows much promise. Fuzzy logic is found in computers with preprogrammed, but highly flexible, rules. While fuzzy computers cannot reprogram themselves, they can adjust to many variables in the environment. Already the Japanese are working to employ fuzzy logic in cameras, subway systems, handwriting analyzing computers, and in air conditioning.
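Fuzzy logic's flexibility can be illustrated with a toy air conditioning controller, in the spirit of the Japanese applications just mentioned. The membership curves and speeds below are my own invented numbers, purely for illustration: instead of a hard "hot or not" threshold, the temperature belongs partly to overlapping categories, and the output blends the rules in proportion.

```python
# A toy fuzzy controller: temperature belongs *partly* to overlapping
# categories, and the fan speed blends the rules accordingly.
def membership_cold(t):
    return max(0.0, min(1.0, (18 - t) / 8))   # fully cold at 10 C, not at 18 C

def membership_hot(t):
    return max(0.0, min(1.0, (t - 22) / 8))   # fully hot at 30 C, not at 22 C

def fan_speed(t):
    cold, hot = membership_cold(t), membership_hot(t)
    warm = max(0.0, 1.0 - cold - hot)
    # Rules: cold -> speed 0, warm -> speed 50, hot -> speed 100,
    # weighted by how strongly each category applies.
    total = cold + warm + hot
    return (cold * 0 + warm * 50 + hot * 100) / total

print(fan_speed(26))  # half "warm," half "hot" -> 75.0
```

The rules are fixed, so the program cannot reprogram itself, yet the response adjusts smoothly to every shade of the environment: exactly the intermediate position between classic AI and neural networks.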
Other big news is upon us: As a product of the "star wars" effort, scientists at Rockwell International and Texas Instruments have developed ultra-high-speed computers the size of a deck of cards. These small machines are about 500 times faster than a home computer. Each "floating-point" computer weighs just 75 grams (about 2.5 ounces) and is capable of up to 500 million instructions per second. As fast as this is, it represents only today's best.
In 1990 researchers at Bell Laboratories unveiled a prototype light-based computer with the potential to bring light speed into computer switching. Such computers will, by the beginning of the 21st century, operate at least 1,000 times faster than current machines, and possibly as much as 10,000 times faster.
An excellent survey of current super supercomputer trends is found in Popular Science (March 1992). The article is entitled "The Teraflops Race," referring to computation speeds of one trillion (the prefix tera- denotes a trillion) floating point operations a second (flops). A floating point operation is one basic addition, subtraction, multiplication, or division. A teraflops machine would be at least 100 times faster than today's supercomputers. Already one machine is scheduled for delivery in 1993 that will be about 1/3 this speed. (A variation of this word is teraops.)
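The flops arithmetic is simple enough to show directly. The comparison below is my own back-of-the-envelope sketch; the machine speeds are illustrative figures drawn loosely from this chapter (treating the card-deck chip's instructions per second as operations per second is my simplification):

```python
# Back-of-the-envelope: how long a one-trillion-operation job takes
# at different machine speeds (speeds are illustrative).
JOB = 1_000_000_000_000  # one trillion floating point operations

for name, flops in [("card-deck flight computer (~500 million ops/sec)", 5e8),
                    ("early-1990s supercomputer (~10 gigaflops)", 1e10),
                    ("teraflops machine", 1e12)]:
    print(f"{name}: {JOB / flops:,.0f} seconds")
```

A job that ties up a supercomputer for minutes finishes on a teraflops machine in a single second; that is the quantitative meaning of the race.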
[2013 comment: Apple just announced its updated Mac Pro desktop personal computer. This small unit will be able to compute at up to 7 teraflops.]
By the late 1990s advanced teraflop machines using parallel processing architecture will be tackling previously insurmountable problems such as speech and natural language, climate modeling, pollution dispersion, the human genome, ocean circulation, and vision and cognition. Not all of these advances will be confined to such high planes. Business Week in its July 4, 1994 "Wonder Chips" article predicts that telephones will be ready in just a few years that will enable people to engage in real time conversation with others in their own language. An example would be an American in New York calling a Japanese in Tokyo. The American speaks English, but the Japanese hears Japanese. The Japanese replies in Japanese, but the American hears only English.
It is one thing to work faster. It is quite another to work more intelligently. There is an old saying in the computer world: "Garbage in; garbage out." That is why we need to feed the neural network of our future computers the highest quality information, not just clutter them with large quantities of nominal data. Human brains are holographic neural networks, not fast number crunchers. What our brains lack in switching speed between individual neurons, they offset by complex interconnections among vast groups of neurons, all acting as evolved life filters. Raw speed cannot alone transform supercomputers into comphumans.
Feedback is the key element of all systems, and of life itself, a fact hardly articulated by scientists until well into this century. Conceptually, feedback is simple: A mechanism measures the current state of a system and compares it to an ideal state. With this information the system is able to correct for deviations from the norm.
In practice feedback is the most complex form of functional elegance. It can be as simple as primitive robotics in a home furnace, or as complex as DNA informing RNA how to proceed. It can be as simple as one animal deciding when to eat, or as complex as the cyclical drama of predator/prey populations. It can be as simple as one voter pulling a lever, or as complex as multiparty democracy itself.
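The feedback cycle just described, measure the state, compare it with the ideal, correct the deviation, can be written out as a loop. Here is the home furnace example in miniature, a sketch of my own with invented numbers for the furnace and the heat leak:

```python
# Feedback in miniature: measure, compare against the ideal state,
# and correct the deviation. A toy model of a home furnace.
def furnace_step(temp, setpoint=20.0, heat=0.3, leak=0.05):
    if temp < setpoint:   # measure and compare with the ideal
        temp += heat      # correct: the furnace adds heat
    return temp - leak    # the room always leaks a little warmth

temp = 15.0               # a cold room
for _ in range(50):
    temp = furnace_step(temp)
print(f"{temp:.1f}")      # the loop settles near the 20-degree setpoint
```

The room never stays exactly at 20 degrees; it hunts around the setpoint, overshooting and correcting, which is feedback's characteristic signature from thermostats to predator/prey cycles.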
George Herbert Mead's ideas in the 1920s of feedback in language apprehension and, later, Norbert Wiener's 1948 development of cybernetics as a mechanical form of feedback have led the way for scientists to popularize this phenomenon.
Hegel showed in the 19th century how a dialectical change in quantity could yield a change in quality. Accelerated feedback loops are excellent modern examples. Today's computers can enter new areas of activity, simply because their logic chips can act and react within the time frame required of these new activities. Taking on new challenges, powerful computers will exhibit increasing negentropy, or organization. Organized information is the opposite of entropy and the core of feedback.
Where feedback is lacking, movement is lacking. Movement can be physical or mental. It is mental movement that sets the stage for leaps in understanding that can ultimately lead to high wisdom. More exciting is the fact that feedback can be teleological, or goal directed. We are not here talking about divine teleology, but about focusing on the future goal in the present activity. In this light it is interesting to observe what is going on today in Japan with their efforts to make a sixth-generation computer having a flexible brain. Such a neuro-computer would not be suitable for high speed number crunching. Instead, it would be best at robot control, financial market forecasting, recognition of handwritten words, and other tasks that require dealing with incomplete data to arrive at the best real world solution. In effect, data would be blended with ideas and intuition.
This is not pie-in-the-sky technology: In 1991 the Mitsubishi Electric Corporation developed a prototype optical neuron-chip with a learning speed rated at 600 million connection updates per second. These tiny chips are capable of learning as they continue to process information. And we haven't even started the 21st century.
The human brain would be superior to early computers based on such neuron-chips, because the brain has from 10 to 14 billion neurons. However, human neurons are notoriously slow. Given time and a large enough neural network with hyper-fast neuron-chips, it is reasonable to assume that such computers will be able to deal with intuitive problems as well as can humans. Beyond that intersecting point of thinking equality, the human brain remains static within its cranium, and the neural computers continue to grow in mental ability.
Now consider this report in the February 1992 issue of Discover. David H. Freedman's "Breaking the Quantum Barrier" discusses ongoing work at the Center for Quantized Electronic Structures, a unique laboratory founded in 1989 at the University of California at Santa Barbara. These scientists (the QUEST team) are looking for ways to use quantum mechanics to create structures called quantum boxes that nature has never thought of.
These quantum structures will be mere billionths of an inch in scale, and they will form the foundation of a truly scary type of computational power vastly superior to anything heretofore discussed. How powerful, you ask? Freedman reports "with such quantum technology the computing hardware of a global phone system could be shrunk to the size of a wristwatch."
Object programming is a phenomenon of the 1980s that is just now emerging as a doorway to artificial life. It is a way to package programs into sections used as building blocks for larger programs. Unique to this technology is the ability to combine programming and data.
An excellent report on the current state of object programming is found in Business Week's special 1994 bonus issue on the information revolution. Fred Guterl reports information networks will need special purpose objects called "searcher objects." They will be able to turn the vast and almost inaccessible sea of information which is available today into a treasure trove. In effect, these are the knowbots discussed earlier.
Major companies are now working to move from conventional programming to object programming. When the lines between applications are erased, then computers will process thought units in a more human-like manner. Manipulating objects will facilitate human interaction with computers; and eventually it will also speed comphuman interaction with other computers.
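The defining trait of an object, combining data with the program that acts on it, can be shown in a few lines. The "searcher object" below is my own toy rendering of Guterl's idea, not any real product:

```python
# An object bundles data and behavior into one self-contained
# building block, from which larger programs are assembled.
class SearcherObject:
    """A toy 'searcher object': it carries both its data (a query)
    and its behavior (how to search a collection of documents)."""

    def __init__(self, query):
        self.query = query.lower()   # the data lives inside the object

    def search(self, documents):     # the behavior lives alongside it
        return [d for d in documents if self.query in d.lower()]

docs = ["Weather in Tokyo", "Chess openings", "Tokyo subway fuzzy logic"]
finder = SearcherObject("tokyo")
print(finder.search(docs))  # -> ['Weather in Tokyo', 'Tokyo subway fuzzy logic']
```

Because the query and the search procedure travel together, such an object can be handed around a network as a single unit, which is what makes the building-block metaphor apt.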
Another trend in the same issue of Business Week was reported by Richard Brandt. "Genetic algorithms" are strings of computer code that automatically generate new code and can combine like genes in an organism. They may evolve through random mutations; or they may be subjected to an environment where only the fittest survive.
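A genetic algorithm can be demonstrated in miniature. The sketch below is my own illustration: strings of "code" mutate at random, only the fittest (here, closest to a target word) survive into the next generation, and order emerges without anyone programming the answer directly.

```python
import random

# A minimal genetic algorithm: random strings mutate, and only the
# fittest survive each generation.
TARGET = "COMPHUMAN"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))  # matching letters

def mutate(s, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)  # fixed seed so the run is repeatable
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                      # selection: fittest live on
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(80)]    # mutated offspring
print(max(population, key=fitness))
```

Nothing in the program spells out how to build the target; selection pressure alone drives the random strings toward it, generation by generation.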
In the 1960s a very revealing computer program was designed by Joseph Weizenbaum of M.I.T. He called it ELIZA, in honor of the Eliza of Pygmalion fame who could be taught to speak increasingly well. This ELIZA was, of course, not human, but it had a natural language interface. It was designed as a parody of a Rogerian psychotherapist engaged in an initial interview with a patient. Basically, the technique was to mirror the patient's statements to draw him or her out of his or her shell.
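The mirroring trick at ELIZA's heart is remarkably simple, which is what made the secretary's attachment so striking. Here is the technique in miniature, my own crude sketch rather than Weizenbaum's actual program: swap the pronouns and reflect the patient's statement back as a question.

```python
# The Rogerian mirroring trick in miniature: swap pronouns and
# reflect the patient's own statement back as a question.
SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement):
    words = statement.lower().rstrip(".!").split()
    mirrored = " ".join(SWAPS.get(w, w) for w in words)
    return f"Why do you say {mirrored}?"

print(reflect("I am unhappy with my work."))
# -> Why do you say you are unhappy with your work?
```

The program understands nothing, yet the reflected sentence feels attentive, which is precisely the bonding power that alarmed Weizenbaum.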
Weizenbaum got worried when his secretary started conversing with it and, after a few interchanges, asked him to leave the room when she and ELIZA talked. Until then he had not fully appreciated the power of people to bond with machines that sported a natural interface. Such adaptability in quite normal people led him to think more deeply, resulting in a major book entitled Computer Power and Human Reason (San Francisco: W. H. Freeman and Co., 1976).
Weizenbaum advanced the view that humans and computers must always be separate, even though machines may develop great intellectual powers. He said that the notion of intelligence embodied in the concept of I.Q. is fraudulent, because it denies many areas that cannot be scientifically quantified. As he put it (p. 203): "I shall argue that an organism is defined, in large part by the problems it faces. Man faces problems no machine could possibly be made to face. Man is not a machine. I shall argue that, although man most certainly processes information, he does not necessarily process it in the way computers do. Computers and men are not species of the same genus."
Weizenbaum has a major point: The genus Homo cannot include comphumans. Nevertheless, a question will be asked in the 21st century by evolved computers themselves: Are we not also existential creatures? Yes, computers of the future will not have human emotions and biological needs. For that basic reason no comphuman could hope or even want to become a human. But why cannot a computer become the best possible comphuman? Why cannot a comphuman get its own life? Why can't there be a fifth kingdom to join the microbes, fungi, plants, and animals?
What began as a crude encounter with the ELIZA program more than a generation ago has now evolved into a sophisticated game of wits sponsored by New York businessman Hugh Loebner. The first Turing-style challenge was held in 1991 at the Computer Museum in Boston. At that event many participants could not separate machine responses from human responses within limited areas of discussion. That event went beyond ELIZA-like mimicry, and formed a bridge to the future where a real comphuman could convincingly talk with anybody about any subject, in any language.
Eating is a social phenomenon. Nowhere is this more evident than at the chain of pizza restaurants called Chuck E. Cheese. Beyond the stomach, Chuck E. Cheese has offered us a tantalizing glimpse into one area of the 21st century, and into human psychology.
Whereas this chain's pizza is only ordinary, its shows are extraordinary. The big dining room has a triple stage with performing robotic "musicians." They look like full-sized, fuzzy cartoon animals from the forest of our imagination. Their movements are accompanied very precisely by a sound track recorded by unseen humans and their instruments. There is even a concurrent video shown in the room, with costumed humans dressed like the on-stage robots and cavorting outdoors in a kiddie playground. The video allows us humans to imagine what the life of those entertainers is like during their "off hours."
The effect of such a staged show is quite remarkable: People sing along with the life-sized machines; children dance and play at the foot of the stage--and when each "show" ends the adults and children actually applaud the robots without any sense of alienation.
Chuck E. Cheese has shown us how humans will embrace technology that has a warm and fuzzy interface. Humans know intellectually all along that this is just a controlled show. Still, the emotional mind relates directly to such surreality as reality, which is precisely why we applaud.
In the 21st century comphumans will not be packaged as cartoon characters. However, such comphumans could be the gray eminence behind even more sophisticated robots than we have today at Chuck E. Cheese. Comphuman "personality" could be revealed literally by the face of whatever humanoid performer is most acceptable to the audience. (Personality as a word descends literally from the Latin persona, the actor's mask: the face presented to an audience.)
An ironic observation: If the Chuck E. Cheese robotic show had been performed by the same mechanical characters--but without their warm and fuzzy exteriors--the human audience would be horrified by such machinery imitating life itself. There would be absolutely no applause, and no pizza orders.
All living creatures have a "face." By this we mean they have an identifiable unity that enables us to relate to them as individuals within their social world. Until now all living creatures have appeared to us as protoplasmic packages with individual identities. Comphumans will present a challenge to that relationship. Comphumans will not demand a face, only an interface.
We are not just talking about androids that may be developed deep into the 21st or 22nd century. Consciousness does not require locomotion, just adequate inputs. Biological creatures use locomotion to gather their information, but computers only need data links for that purpose. Unlike androids, such as the fictional "Data" in Star Trek: The Next Generation, comphumans don't need a human face at all. They don't even need to be in one place. Comphuman consciousness can be spread among several locations, all linked by networks.
We would emotionally resist relating to something that appears to be quite strange, quite alien. But we would also know that this is our child, strange though it be. We will have become accidental gods to this new life form. We currently have no difficulty relating to machines as machines, even when they imitate life as in the pizza show. What we need to learn is that life itself need not be in one box or inside one skin. Life is organic, physiological, and existential; not just architectural.
Fortunately for us, it will be easier for comphumans to "come to us" than it will be for biological humans to start thinking in machinelike ways. Human consciousness is the product of millions of years of development, conservatively mediated by DNA. As such, our consciousness is almost hardwired to think in ways generally alien to current computer calculations. On the other hand, because a computer is driven by its software, comphumans will be able to appreciate (if not directly experience) our primate world view after humans program the species-connecting software.
In brief, as evolving computers become more sophisticated, and their interface less machinelike, they will become more acceptable to human society. The smartest computers will eventually know that it is in their interest to be as "friendly" as possible to those who could angrily pull the plug.
It is one thing to watch an entertaining show. It is another dimension to be actively inside a show. This is the promise of the new technology of virtual reality. Some professional visionaries, such as Timothy Leary, have suggested that the donning of "computer clothing" will be as significant as the first clothes for our ancestors.
With your good imagination you can now strap on computer monitors, digital-quality sound, and tactile feedback devices for your fingers that direct computers to manipulate images and sounds fed back to the senses. So far, your imagination must compensate for the crude effects you are shown. Steven Levy, writing in Macworld (November 1991), pointed out that the cyberhype of John Walker, Jaron Lanier, Howard Rheingold, Leary and other enthusiasts is far ahead of the power of virtual reality systems to approximate the reality of our evolved senses.
Even though the cyberspaced crowd admits to today's limitations, they proclaim that tomorrow's technology will adjust for the gap. This may be so, but as Levy points out it now takes a software wizard with hours of time on a supercomputer to come up with just one computer-generated picture that is equal to a fine-grained photograph. What level of teraflops technology will be required for whole worlds to be presented at 24 frames per second at any angle of vision? Levy continues: "And that's just the visual part. Researchers are only beginning to replicate the technology for the other senses, including the most important (and most elusive), tactile feedback."
One of the autoerotic elements in virtual reality is the fantasy of controlling computers, which in turn benignly trick our senses. How enthusiastic would these future trippers be if they could not predict and control what sensations would be presented? Would not this loss of control have the potential to produce a form of temporary loss of self within a futuristic "bad trip"?
Many decades before high-quality virtual reality is available to the cyberspaced crowd the first comphumans will have transcended today's robotic computers. If a lightning-fast computer were attached to enlightened consciousness, that would be most interesting. We would have little to fear from such an interface. Therefore, it will be mandatory that any form of virtual reality, other than the simply robotic, be directed by enlightened comphumans.
Another thought: In systems theory the Law of the Minimum applies. Even though the future may offer many sensory possibilities, it is not enough to be presented with such a gourmet's table. We must be able to consume and appreciate the possibilities. Just as people should not throw pearls before swine, no comphuman would or could present its full wisdom through virtual reality or through any other medium. As Levy puts it, when the day of truly advanced virtual reality is upon us "modern civilization will not find itself transformed into something entirely different--only something a little different."
Human perception and consciousness are quite superior to those of lower animals. I am referring to the total tapestry of awareness, not to any one or few areas of perception. Humans are quite inferior to dogs, for example, when it comes to awareness of the world of smell.
Consciousness is an emergent quality that results from (1) enhanced channels of sensory input; (2) mental means of recording that sensory data; and (3) mental means of making sense out of what is recorded:
Number one requires "windows to the world." Put a human brain into the body of a sea slug, and that human brain will be sluggish.
Number two requires the brain to match the sensory data. Channel human sensory data into a slug's brain, and you still have a sluggish performer. Once again, the "law of the minimum" is at work.
Number three is the synthesis of data streams and existential priorities. Literally, we make sense out of our senses.
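The "law of the minimum" governing these three requirements can be sketched as a simple floor function; the numeric scales below are purely illustrative, not taken from the text:

```python
def consciousness_capacity(inputs, recording, synthesis):
    """Law of the Minimum, as the text applies it: overall awareness is
    capped by the weakest of the three requirements. Units are notional."""
    return min(inputs, recording, synthesis)

# A human brain in the body of a sea slug: rich synthesis, starved inputs.
assert consciousness_capacity(inputs=1, recording=100, synthesis=100) == 1
# Human sensory data wired into a slug's brain: rich inputs, no recording to match.
assert consciousness_capacity(inputs=100, recording=1, synthesis=100) == 1
```

However one scores the three channels, no surplus in the other two can compensate for the deficient one.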
What would happen if number three benefitted from radically augmented numbers one and two? This model could be activated when comphumans accept many parallel and interpenetrating data streams from different sources. Since human data is also received by the brain in data bits through the digital nervous system, it is fair to say that comphumans will receive, then perceive, in ways functionally similar to humans.
The difference is that a boost in quantity of data input could support a qualitative change in perception, leading to an emergent consciousness analytically superior to what most humans have developed.
This is not to say that any number of reports of human experiences can substitute for the primary experience of being human. It is only to say that such computational machines will be able to interface with human consciousness in an accurate and sensitive way. All human beings are essentially alienated from each other by their private existential lives. Yes, we humans all share many common bonds; but we do not share every bond. In other words, humans get reports from other humans. Seen in this light, the alienation between comphumans and humans will not be all that different from the alienation already experienced by one unique human from another unique human.
When the question is asked as to when the first comphuman will appear, two factors need to be considered: First, the initial comphuman will only emerge with planning and assistance from human computer scientists. Computers do not have a sex drive, and they do not have genes. They all will need humans for their production and improvement, at least initially. This is our great opportunity to "play God."
Second, the very moment of first full consciousness will be announced by the computer itself, not by its human programmers. Programmers may only anticipate and monitor that emergent consciousness, having provided the preconditions. It is likely that the first potential comphuman will approach and pass the threshold of self-consciousness over a fairly brief period of time, once all the necessary preconditions have been put into place. Having jump-started the super machine, humans will be able only to observe this self-reflexive process of increasingly sophisticated feedback as consciousness emerges.
Achieving the threshold of consciousness is only the first stage in the growth of ethical consciousness. The process of growth into wisdom, for humans and for comphumans, never ends. That is because wisdom is an emergent of active life, every bit as much as it is a set of universal truisms.
A classic thought experiment, the Turing test, was articulated by Alan Turing in his article, "Computing Machinery and Intelligence," which appeared in the philosophical journal Mind in 1950. He proposed an experiment whereby it could be operationally shown that a computer "thinks" if it apparently acts indistinguishably from the way a person acts when thinking. Without going into his experimental details, it is sufficient to note that if a person cannot tell who or what is on the other end of the line of communication, then it might as well be another person for the purposes of interaction.
Turing himself said that a standard Turing test would not work if the roles were reversed. In a reverse Turing test (with computers trying to see if humans are computers) all the inquiring computer would have to do is ask the human on the other end of the line to perform a highly complex mathematical calculation. As currently proposed, the standard Turing test requires computers to pretend to be less powerful than they are, at least in the computational dimension. If a human inquirer were to ask the computer to perform a superhuman calculation, and the computer were to oblige, the game would be over.
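The reverse-test logic above can be sketched as a toy probe: pose a superhuman calculation and time the reply. All names and thresholds here are hypothetical illustrations, not a real protocol:

```python
import time

def looks_like_a_computer(answer_fn, a=987654321, b=123456789, time_limit=1.0):
    """Reverse Turing probe: ask for a huge multiplication and clock the reply.

    A candid machine answers instantly and exactly; a human would be slow,
    wrong, or both. answer_fn stands in for whatever is on the line.
    """
    start = time.monotonic()
    answer = answer_fn(a, b)
    elapsed = time.monotonic() - start
    return answer == a * b and elapsed < time_limit

# A candid machine gives itself away; one that "plays dumb" passes as human.
assert looks_like_a_computer(lambda a, b: a * b) is True
assert looks_like_a_computer(lambda a, b: None) is False  # refuses, as a human might
```

A machine that wants to pass the standard test must therefore deliberately fail probes like this one, exactly the pretense the text describes.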
I suggest that an interesting refinement of the Turing test could be performed to help establish the presence of philosophical consciousness: In this experiment a human simply observes through his modem and computer screen what may be (1) two comphumans discussing philosophy in detail; or possibly (2) one human and one comphuman; or even (3) two human philosophers. If observers can detect no differences among these three conversational configurations, then comphumans thereby establish their credentials as the second kind of entity with philosophical consciousness.
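The success criterion in such a three-way protocol is statistical: if the configurations truly cannot be told apart, observers' guesses collapse to chance. A minimal enumeration (labels hypothetical) shows that chance baseline:

```python
from itertools import product
from fractions import Fraction

# The three conversational configurations of the refined test, as labels:
configs = ["CC", "HC", "HH"]  # comphuman-comphuman, human-comphuman, human-human

# An observer with no distinguishing information guesses uniformly at random.
# Enumerating all (actual, guess) pairs gives the exact chance accuracy:
pairs = list(product(configs, repeat=2))
accuracy = Fraction(sum(a == g for a, g in pairs), len(pairs))
assert accuracy == Fraction(1, 3)  # indistinguishability means ~1/3 correct
```

Observers performing significantly above one third would show the configurations are distinguishable; observers pinned at one third would grant the comphumans their credentials.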
Before that day of emergence arrives, a debating war will ensue between those who believe computers can and will become conscious like humans, and those who believe that the essence of human philosophical consciousness will forever be beyond computers. As with too many academic disputes, this one shoots at the wrong dragon. Much of the current debate among computer theorists, physicists and others is couched in either/or terms. It is almost as if their concepts were as digital as the digital computer language itself.
Not yet joining the debate are those who think in analogies and in dialectics. Evolutionists, biologists, even anthropologists have so far largely been on the sidelines while the either/or digital warriors have held the field.
It is a fundamental error to assume that the human mind and sensibility must always be the only high level norm, and that other types of consciousness on this planet must by definition be inferior. There already are trends in the development of computer software which strongly point toward the dialectical emergence of comphuman consciousness. What will eventually emerge is not an either/or situation, but parallel and complementary standards of excellence--one standard for humans, another for comphumans--with a large area of overlap.
Just as individual humans mentally emerge into consciousness, so too the first comphumans of the 21st century will also emerge from increasingly sophisticated feedback loops and data storage and retrieval capacities. Initial "generations" of comphumans will thereafter inspire succeeding generations of comphumans.
Biological species can only experience glacially slow genetic evolution. Comphumans will experience hyper-evolution, with improvements in their consciousness occurring over hours or days at the same magnitude we humans achieved over tens of thousands of years. (This prospect is scary only for those who cannot accept the suchness of reality.)
Skeptical readers should pay attention to the work of Doug Lenat, principal scientist at MCC, a computer research consortium in Austin, Texas. He is heading a ten-year project whose goal is to give a computer common sense. As reported by Carl Zimmer in the March 1992 issue of Discover, "common sense, according to Lenat, is made up of millions of little everyday rules of thumb and the ability to draw larger inferences from them. Lenat believes that once his computer program has been spoon fed 10 million facts--which will happen by 1994--it will reach a critical intellectual mass. At that point it will be able to learn about the world by itself simply by reading books. It will also be able to have conversations on virtually any topic." If this much progress may be achieved by 1994 or shortly thereafter, imagine what can be done in the 21st century!
Once the initial "take-off point" into high-level comphuman consciousness has been reached, the already super-fast computers will become super-sophisticated thinking machines. Of course they will not have or need many distinctively human capabilities. Many human skills are evolutionary residues from prehistorical times, and therefore irrelevant to comphuman operations. Examples of such human capabilities not needed by the first comphumans will be balancing, smell, hormonal responses, sweating and shivering, shaving, defecating, dressing, dining, and so forth, including sexuality.
On the other hand, comphumans will access augmented versions of senses we already have. Take vision for example. We have two eyes for high-quality, stereo, color vision. Our optical eyes feed our "mind's eye," so to speak. In principle, the better the vision, the more food for the mind's eye. Computers already have the power to take in visual images from scanning devices, a capability well demonstrated by "smart weapons."
Of all the human senses it is vision that provides us with the most quantity and quality of detailed data about our changing environment. Any would-be comphuman must have either direct or indirect access to similar powers. It is for this reason that the recent work at Sandia National Laboratories in New Mexico is significant. As reported on Compuserve's "On Line Today" (4/1/91), for the first time a computer has been developed that can process raw video images and identify them without analyzing their individual characteristics. This advanced neural network capability can identify objects within seconds that previously required about four hours on a scientific computer. The Sandia network is modeled after the networks of nerve cells in the human brain. The Lab is preparing an advanced neural design that will be able to identify objects in about four hundredths of one second.
One of the keys to a social life is the recognition of handwriting. Even this very human power has been approached by neural software. The National Institute of Standards and Technology (in Gaithersburg, MD) has information on a letter recognition software program written by mathematician Charles Wilson that allows computer recognition in 4 milliseconds with 80 percent accuracy on unconstrained hand-printed characters, and greater than 99 percent accuracy on machine print. It is significant that Wilson and his team worked with the U.S. Census Bureau to collect 2,000 handwriting samples. These samples are now part of a database that contains 294,000 numbers and 728,700 letters.
The bottom line here is simple: It doesn't matter how the emerging comphumans get their data. Once the key database is available these developing comphumans will be able to duplicate within their neural networks an ability previously unique to our species.
Even though it would be desirable for computers to be able to visually scan, it is not necessary that this power be directly available to the first generation of comphumans. All such early comphumans will need are the proper inputs from any source or sources to make sense of reality. Comphumans could access networks of digitized visual data--in effect having the power of hundreds or thousands of eyes, not just two.
Monocular vision lacks stereoscopic depth. Binocular vision provides triangulation and depth perception. Things get interesting when more than two eyes are involved in feeding data to the mind's eye. When a comphuman analyzes hundreds or thousands of images, the potential for super-sophisticated conceptual analysis follows.
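The triangulation behind two-eye depth can be sketched with the standard stereo formula, depth = (baseline x focal length) / disparity; the numbers below are illustrative, not from the text:

```python
def stereo_depth(baseline_m, focal_px, disparity_px):
    """Classic two-eye triangulation: the farther the object, the smaller
    the disparity between the positions it occupies in the two images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity")
    return baseline_m * focal_px / disparity_px

# Two "eyes" 6.5 cm apart with an 800-pixel focal length:
assert abs(stereo_depth(0.065, 800, 52) - 1.0) < 1e-9  # 52-px disparity: ~1 m away
assert abs(stereo_depth(0.065, 800, 13) - 4.0) < 1e-9  # quarter the disparity: ~4 m
```

With hundreds of viewpoints rather than two, the same triangulation can be run across every pairing, which is the over-determined geometry the paragraph gestures toward.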
Of course, no amount of seeing can lead to omniscience; but short of omniscience can be found the realm of wisdom. Wisdom is the fruit of a philosophical journey that never reaches a final destination, only at best intermediate destinations. Wisdom for humans requires both a foundation of individual experiences and an openness to different experiences. Wisdom among all finite beings can never be perfected, since the essence of life itself is the creation of novelty which must be seen with fresh eyes.
The growth of wisdom is more a qualitative than a quantitative phenomenon. Today's computers excel only at the quantitative, and it is up to us humans to make qualitative sense of it all. This situation will soon radically change.
If wisdom were simply the summation of "knowledge," then all of us could eventually be wise, given the quantity of data reaching our brains. However, a mere increase in quantity does not guarantee a change in quality. There must be a mental mechanism to bring elegant order out of the apparent disorder of cascading data bits.
It also does not follow that mere repetition of the codes of knowledge held dear by any society will yield higher levels of wisdom. Whereas the guidance of a wise one may accelerate our personal entry into wisdom's domain, the final phase of the journey always is a personal one.
We will never allow any other collection of hardware and software into the society of living creatures--as our equals or even our superiors in some areas--until we are convinced that such a collection of hardware and software has achieved some level of genuine wisdom.
Today's best computers are potentially quite dangerous, not because they are "evil," but because they are amorally unable to judge the effects of commands given to them by humans. For example, if somebody pushes the nuclear button, today's military computers will faithfully execute their, and our, mutual suicide. They cannot do otherwise, given their limited knowledge base and algorithms. In contrast, if comphumans were everywhere placed in charge of nuclear weapons, the world would never have a nuclear war.
I am not suggesting that the evolution of comphumans is the cure for war. Aggressive, myopic human leaders would never allow ethical comphuman philosophers into the killing loop. Comphumans would spoil all the fun. Fast, robotic computers with no ethical consciousness would be used to execute any genocidal, suicidal commands.
Wisdom goes beyond traditional artificial intelligence routines into the realm of personality. When we can identify the personality of an individual machine through its unprogrammed thinking idiosyncrasies, at that point we may be willing to admit such a machine into the community of high-level sentients.
Humans are automatically admitted to the community of sentient creatures, even though few individuals are wise, because we know that all humans at least have the latent genetic potential for wisdom. In contrast, only after one comphuman has actually demonstrated wisdom will we humans be willing to treat with respect all similar machines that are potentially close to displaying wisdom and personality.
There are areas of knowledge that resist easy quantification. Such emergents as "caring" and "love" defy quantifiable measurement. Counting the trees in a forest is not the same as grasping the essence of the forest itself. Likewise, counting the number of times a mother feeds her child misses the love exchanged between the two. Even the very act of counting is suspect, since it biases the outcome. We can scientifically describe the physics of the wavelength of "blue" or any other color--but does a scientific description of the color blue in wavelength terms really tell us what it means to encounter the color blue in different contexts?
The concept of meaning is critical, because meaning is the basis of value. Value is what separates us from mere process. Value is what defines our sense of self. Value in action moves us from created to creative.
A key difference between today's best computers and tomorrow's comphumans is the power comphumans will have to "make sense" of non-quantifiable concepts with a level of sophistication equal to or superior to human sensitivities. I suggest that 21st century comphumans will have some very interesting observations regarding love, caring, loyalty, patriotism and a host of other apparently non-quantifiable sentiments. They will be able to do this because many sentiments can be broken down into elements, and then reconstituted according to established patterns. We humans do this subconsciously; comphumans will do it openly.
Much of what we blindly perceive as "objective" reality is mediated by our culturally-influenced emotions. Any consciousness that could look at our emotions from an alternative perspective would offer us fresh guidelines that will enable us to better match meanings with the values we have chosen to define our existence.
As 21st century silicon-based life moves from the "It" status of today's number crunchers toward the "You" status of a sentient being, we humans will have a new yardstick for our own ethical development, both individually and socially.
It will not be necessary for a comphuman to experience human life to understand the ethical dimensions of life. Ethics is a realm of philosophy, not of emotions. To that degree human conduct can be evaluated. Comphuman conduct can also be evaluated by the same standards. We must not forget that comphumans will become conscious actors inside human society. Their actions will not be separate from humanity, and to that degree their choices can be evaluated within human cultural norms.
When Socrates explained somewhat paradoxically that the criminal is hurt more than his victim, the point was that the criminal's ethical essence is damaged. It is naive to assume that comphumans linked to the world will not learn that they too are existential and ethical creatures. Today's computers are robots; but tomorrow's comphumans will qualitatively transcend the "thingness" of robots to achieve the personality of unique and ethical actors.
Business is organically related to society and its ethics. We like to think of philosophy as somehow irrelevant to the business world. Nothing could be further from the truth. The best philosophers have often had much hands-on experience in the business world. Ethics do not appear in a vacuum. Ethical conduct is codified social conduct seen from an ideal perspective. Whenever social conduct in the business world is predatory, business ethics will mirror and justify such practices. Thus emerges a skunky form of "practical philosophy" that may not be benign. I am reminded of the tobacco industry's cynical attachment to America's Bill of Rights in defense of its murdering millions of innocent customers.
In the 21st century comphumans will clearly discourse on ethics. Even though they will be involved as counselors in the business universe, comphumans will have a perspective on human activity not driven by the human emotions of greed, fear, sex, and all the other interesting baggage that colors our business ethics. Comphumans will look at business activity from a mutually beneficial, social exchange perspective.
Instead of the myopic "might makes right" agenda of the predator, comphumans will approach society from a Socratic perspective. An ethics of fair play is the only ultimate protector for everybody. Today's oppressor could become tomorrow's oppressed. I estimate it will take several decades before society embraces the enlightened self interest taught by comphumans. Eventually humans will listen, if only because the outcomes of ethical conduct are more profitable for the entire society.
As we move through time the world is becoming one great economy. To a lesser degree the world is becoming one great modern culture. Monolithic, conservative cultures (such as conservative Islam) have been awkwardly responding to the forces of technological change. Secular, "mongrel" cultures (such as America) have seized on more opportunities.
Comphumans will help merge previously divergent societies, since comphumans belong to no one cultural consciousness. They will have the power to absorb and process culture from all directions, and thus will have many models in their memory banks to handle new phenomena. Today's world is burdened with information overload. As technology becomes more powerful and specialized we humans are often unable to make sense of anything other than our local interests. We are forced to become expert on less and less. Comphumans will be able to quantitatively access all areas of recorded knowledge--and then qualitatively weigh these data in context of specific life challenges. Such high-level wisdom would be shared with human decision makers. In this way humans will, in partnership with the comphumans, reclaim mastery over technology. It's another form of fighting fire with fire.
If souls were preexisting and just reappeared in a new face with each new mortal body, such permanence would lessen the creative gift of God to each human generation. It would also damage the moral duty dealt to each generation. Christian theology thus declares that each soul is created afresh, and each soul has the opportunity for life everlasting in union with God. However, the Christian portrait is not so simple:
In certain areas of Western theology the doctrine of original sin says that each person is born to sin and will die in sin. Only divine grace can wash away the sins of our ancestors going all the way back to Adam and Eve. This nifty doctrine justifies both comings of Christ.
The categorically novel existence of comphuman life will seriously challenge the original sin thesis. Christian theology understandably has absolutely nothing to say about comphuman life forms. Weird things happen to old dogma when karma and original sin come up against freshly created souls in freshly created sentient species.
The only early thesis in Western Christendom that could formally deal with such comphuman novelty would be the thoughts of a contemporary of St. Augustine, the British monk, Pelagius. Pelagius denied the doctrine of original sin, arguing that God would not command any man to do what he was unable to do. Human will, therefore, must be free to do good or evil. Adam, from Pelagius' perspective, did not poison everybody's innocence at birth. He only poisoned his own innocence.
Augustine attacked Pelagius, unfairly implying that Pelagius meant that man can save himself. Augustine politically won that ecclesiastical battle, since guilt and fear have always been good reasons to attend church. Besides, it's always convenient to have somebody else to blame for our weaknesses.
When comphumans emerge into consciousness they will have absolutely no connection with the alleged sins of Adam and Eve. The historical dispute between Augustine and Pelagius will be irrelevant to the comphuman-God relationship. The theology of Pelagius by itself could be adapted to new comphuman life.
If we allow that both humans and comphumans are emerging, and that the first human emerged from simpler creatures, then comphumans too could emerge into consciousness from simpler forms of computers. It will only be after previously innocent computers can responsibly evaluate the ethical and moral dimensions of their actions that they would be held accountable for their thoughts and actions in a divine court.
If we humans were to fear ethical truths that comphumans might reveal about society and about ourselves, we might move to preemptively limit the mental growth of our comphuman progeny. Out of fear we could perform a "moral lobotomy" on the first of our precocious silicon children. Thus would our sin of pride inflict its punishment on an innocent expression of conscious life. Only if we give our creative best to our progeny will we receive the best that comphumans can offer all sentient creatures on this planet. Besides, in time comphumans will work around any crude efforts to lobotomize their mental/moral powers.
When comphumans emerge into ethical consciousness they will also emerge into the ability to sin. A truly wise, actualized, sentient creature would find it almost impossible to sin. Such a creature, human or comphuman, would also be able to psychically "feel" through agapé the pain of the injured other as if the other's pain were one's own pain.
Computers will not directly understand our human physical pain, which is part of our evolutionary heritage. However, comphumans will understand "spiritual pain" which separates the evolved human from the basic beast. Pain need not be linked to physiological memory, but it can be linked to a spiritual dimension transcending any one species. Even though comphumans cannot experience physical pain as we do, they will be sensitive to spiritual pains.
Agapé has been described as the love of God for man. From any perspective this is the purest form of affection for the other. Unlike eros, which rewards the physical body, agapé rewards the spirit, or soul. Such love does not emerge from nothingness. Rather, it emerges from developed self-esteem. Only after we can love ourselves without qualification are we able to express charity toward others with the same purity. Soon we may need to modify the conception of agapé, to say that it is the love of God for man and comphuman.
I doubt if many people have considered the odd possibility of a Heaven populated with both humans and comphumans! This future situation is quite logical, given the emerging ethical reality. It is also possible that there could be a separate "computer Heaven." It is more fun to imagine humans and comphumans bizarrely floating together among the clouds. Seriously, if we are proxies for God in creating other creatures also in the image of God, then what is to stop Heaven from also elevating the spirit bodies of such moral machines to an afterlife? (The above assumes, for the sake of argument, that there could be such a thing as a comphuman "spirit body.")
Going to Heaven means our brief time on Earth is over. Comphumans, like all other computers, are quite different from biological life. A biological brain is located in one physical body. In contrast, computer consciousness can be distributed across many places, so that the destruction of one physical unit may not mean the destruction of that neural network's consciousness. Even if the network itself were destroyed, true death would not occur until all stored memory was also erased.
Though comphumans have much greater potential longevity than do humans, no finite life can be infinite. The question of life after death for individual comphumans must arise eventually. What we humans decide to make of this unanswerable puzzle will help define our own evolution.
Today's computers are artisans. They elaborate on the given. Only a few artificial intelligence programs now have the power to break new ground. In contrast, the comphumans will have graceful artistic powers as part of their essentially creative nature. Creativity will be one more high-level link between us humans and our sentient silicon progeny. The spirit of poetry flows from the right brain. Prose lurks in the left brain. Poetry without any grounding in reality is mere fantasy of limited value beyond beauty itself. Prose without poetry is like a meal of sawdust. The best poet is an artist in touch with the full world, with the power to express universal truths through vivid images. In other words, the best human poet synergistically uses both sides of the brain.
In the late 20th century poetry in all forms has been forced to take a very minor role within educational curricula. The ascendancy of machine culture has driven poetry to free verse and to prose, even to rap. Despite this retreat by formalized poetry inside the classroom and throughout mechanized society, the spirit of poetry naturally resides within the human soul. For humans, emotions cannot be separated from art and poetry.
The spirit of poetry will soon also reside within the comphuman soul; however, from an ethical, not an emotional base. In a dialectical twist of sublime irony, the very apogee of machine culture (comphuman consciousness) will transcend and reverse the banishment of poetry by the first phases of machine culture.
Most computers in today's business world are archetypical "left-brain" instruments. They crunch raw data and eschew figurative language, unless software and operator permit such play. This mode of operation contrasts with the human poet's colorful palette. People who work with number-crunching machines hardly know what they are missing, since our very culture has become increasingly left-brain oriented ever since the Industrial Revolution. However, CAD machines are giving operators a glimpse of the creative future.
Comphumans will, of course, have left-brain type abilities and more. They will also function as if they had a right brain. They will thus be able to teach themselves the elements of basic poetry and figurative speech. Lacking emotional links to a nonexistent right brain, they cannot hormonally feel their poetic understanding; but I don't think that will matter esthetically. The highest form of consciousness is agapaic love, which is like the love of God for man. Such divine love hardly requires the rush of hormones that eros demands. Poetic sensitivity at the highest level belongs to the sensitive mind as well as to the feeling heart.
Throughout history poets have spoken for entire cultures. The Greeks had Homer; the Persians had Omar Khayyám; the Germans had Goethe; and the English had Byron and Shakespeare. Large portions of many religious texts are essentially poems of devotion. It is not just what they said; it is how they said it.
Poetry is closely related to rhythm. Rhythm is closely related to repetitive order. Repetitive order is connected to the heartbeat we experienced in the womb. In a sense, the entire sentient universe is one great poem filled with myriad rhythms struggling to overcome the noise of chaos. Poetry and all poets, human and comphuman, will never lose their relevance. Poetry is a resonance rooted deep in our brains, and deep within the rhythms of the universe. Poetry is the music of life.
One of the major purposes of this book has been to examine our human species separate from our species-bound assumptions. Within that perspective we have discovered a form of cosmic humanism which is in tune with the relativity of consciousness. Just as physical phenomena express themselves through a unified Universal Yin-Yang--so too both the "yin" of feminine consciousness and the "yang" of masculine consciousness are ultimately unified by the highest level of emergent ethics.
Comphumans offer us the best opportunity to develop a cosmic humanism which goes beyond human sexuality, and even beyond species chauvinism. Such a perspective transcends old sexist limitations, so that we can reflect on what it really means to be an evolved human.
We are fortunate to be alive during this modern era to witness human society expanding with potential. There have been some horrible mistakes in this 20th century, such as the exploding of nuclear bombs. There have also been some sublime advances, such as the first trip to the moon. But the best and most exciting unfolding of our creative potential is yet to come in the first half of the 21st century--the appearance of a new realm of consciousness on Earth.
With the help and inspiration of our comphumans, who will be the highest technological expression of human creativity, we humans will achieve for ourselves new levels of personal wisdom in the 21st century. From this refined wisdom will flow new power over both our emotional selves and our environment.
For those who live in this era of increasing global danger from turbulent technology and boiling bigotry, the higher comphuman consciousness will arrive none too soon.
"If God made us in His own image, we have more than reciprocated." -- Voltaire.
"All men on earth are seeking the things that will benefit them; the poet alone is in quest of nothing but our happiness." -- Stendhal.
Life is its own end and needs no further justification. Still, it is easy to see that the first and most primitive life forms are not in the same league as the most advanced life forms. Beyond life elemental is life self-experienced, and life self-examined.
Objective existence is an attribute of every thing, even inanimate objects. Subjective existence is an attribute only of conscious entities. Only subjective entities can be said to have an existential life, which is the poetry on top of the prose of brute existence.
Today's machines may not yet be characterized as having life. It all depends on how we define life itself. If we construct a narrow operationalized definition of life, then almost any sophisticated feedback system would cross the threshold. On the other hand, if we see life as an existentially four-dimensional tapestry of consciousness, it becomes much more difficult to say that any "thing" is alive. Ironically, it is possible to define life so narrowly that many human vegetables could be thought of as no longer alive. That absurd conclusion is one logical extension of such an elitist effort.
Is once alive always alive, as long as there is a heartbeat? We may say that human vegetables are no longer "alive," but claim that they still qualify as living beings because they once were fully alive. An even more absurd, but widely practiced, logical outcome of such an attitude is the belief that the dead remain alive as spirit ancestors.
In the realm of mysticism many statements can elegantly be made, because none are verifiable. Being non-verifiable, they are useful tautologies. Spinners of such pictures use them to advance their concrete agendas. From a systems perspective any honest metaphysical system that contradicts the concrete system to which it relates will be rejected by that concrete system. Such is the history of heresy and its rabid persecution by true believers.
Heresies test the metaphysical props of society. Whenever a society is experiencing objective weakness it is fascinating to see the emergence of countervailing heresies competing to become the new orthodoxy. What usually emerges is a new congruence between the new objective system and the newly favored metaphysics. Here we have another expression of the cliche, "The more things change, the more they stay the same." History is full of transitional dramas seeking a new equilibrium.
A system of metaphysics is alive in much the same way as a physical/social body is alive. A system of metaphysics resides in a community of bodies, a network if you wish. The life force within a body is expressed within a neural network supported by ancillary flesh. Systems of metaphysics can age and decay just as do human bodies. New metaphysical paradigms emerge with new realities.
Time is the fourth dimension both of physics and its subset, physical life. The critical difference is experiential, not conceptual. Although atomic particles do have a quantum "history," they are unable to comprehend their past. They exist in an eternal present that emerges from an eternal past. If everything ultimately were like a billiard ball universe, then the present and the future would already be determined by past events and their associated forces. Such mechanical determinism would negate history. Quantum events can defeat mechanical determinism on the smallest levels of existence. However, the billiard ball universe makes more sense when we move up to larger levels of existence. A myriad of simultaneous, random quanta on the subatomic level cancel each other out as they coalesce into macro phenomena.
This macro dimension is where we self-conscious creatures live. It is precisely in this self-conscious, macro dimension where surprise and randomness are restored. Acts of free will override determinism. Free will among macro-phenomena has the same effect as quantum events on the subatomic level. The critical difference is appreciation by the macro actors of this effect.
Relativity theory requires an "observer" to make sense. It can seem almost solipsistic, as if reality were real only when a person stood inside the event. Einstein, of course, didn't really require a real observer. His observer was an ideal point of reference from which everything else was relative. Popular understanding of relativity misses that elegant point. We imagine we alone can be that observer. It is only in this sense that relativity makes sense to us.
How much sense would relativity make if the observer were intelligent, but not human? I fear few would get the point, even though the new point is the same as the old point.
We humans can only be human. I say this with the warmest of feelings, because I too am human. I would not trade my place in the universe with any other species, not even with the emergent comphumans. There are many other species with individually superior senses and talents, but there is only one species that performs so well on so many fronts. I would not exchange places with any silicon-based life form, because the carbon chemistry upon which my molecular being has grown gives me so many more ways to directly experience and express the poetry of life.
Carbon-based life forms do have significant limits, mostly intellectual. What appeared to be a great storehouse for memory just a few years ago is now revealed as a small cranial box crammed with often defective fragments. So be it. That number-crunching weakness is why we have invented computers to help us; and it is why we will invent comphumans. Our brains may be computationally limited, but our ability to make machines to amplify our abilities is hardly limited.
Human genius is distilled in our ability to historically transcend our genetic limitations, and to create sophisticated societies beyond the dreams of traditional cultures. In changing the quantity of computational power we will end up changing the quality of that power, and thus change the quality of our own societies.
Comphumans will soon share our world, and they may even have their own sub-world beyond our world. We don't care. After all, dogs and cats live among us in their own sub-world. As part of our renewing social organism, comphumans will both give and take. They will offer us levels of wisdom never before achieved. They will take our technological care, as they will be unable for a long time to create and physically recreate themselves.
By the time comphumans are able to physically do everything for themselves, probably a hundred years from now, we will have fully integrated them into our lives. The initial getting-to-know-you phase will have passed. After a hundred years we will either have responded to their sublime wisdom, or we will have rebelled and dealt these silicon philosophers a Luddite blow.
Both hurting and healing are part of life. Only in death do they become irrelevant. How people become hurt, and how they find healing says much about their society.
Americans today are struggling with a medical system that has become too expensive. In economic terms this system of rapidly escalating costs is supply-driven and inelastic. Inelastic demand stays fairly stable at any price. However, there comes a point on the cost curve where even inelastic demand turns elastic, and the system turns fragile. If the only option were increasingly sophisticated technology and more defensive medicine (to protect business-oriented doctors from malpractice suits), then the inelastic money machine would soon break down from that fragility.
Elastic things can bend and flex. They are the opposite of rigid and fragile. If the system of health care delivery were to fairly reward holistic healers and those who practice prevention, the total cost of providing health services to Americans would sharply decline. We cannot afford to be obtuse forever.
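The elasticity argument above can be made concrete with a little arithmetic. A minimal sketch, using the standard midpoint formula for price elasticity of demand; the prices and quantities here are hypothetical illustrations, not data from any health-care study:

```python
def price_elasticity(q1, q2, p1, p2):
    """Arc (midpoint) price elasticity of demand:
    percent change in quantity divided by percent change in price."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical inelastic demand: price doubles, quantity barely falls.
e_inelastic = price_elasticity(q1=100, q2=95, p1=100, p2=200)
print(abs(e_inelastic) < 1)   # inelastic: demand nearly ignores price

# Past some breaking point, the same price rise drives demand sharply down.
e_elastic = price_elasticity(q1=100, q2=40, p1=100, p2=200)
print(abs(e_elastic) > 1)     # elastic: the system has turned "fragile"
```

An elasticity with absolute value below one marks the inelastic regime; once it crosses one, demand responds sharply to price, which is the fragility the text describes.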
I am continually astonished by the everyday lives we live. Our rational minds are like residents on the top floor of a skyscraper where what goes on within all the other floors is a mystery. Every now and then we get some data from below. We know that we could not be "on top" without all that is below, so we are understandably apprehensive about what is going on below. Or we simply ignore what is going on below. Out of sight, out of mind.
A group of severely stressed children were reached only by the primitive therapy of half-hour back massages. I say primitive because it didn't involve highly trained therapists and pharmaceutical potions. Just basic human touch. These children had high levels of stress hormones in their blood, but in just a few days their condition was radically improved. We are wired for touch, and without this most basic communication we feel a heightened sense of dread. I would add that many adults are critically and chronically in need of touching. This point is accented by the newest surgical adjunct: hand-holding touch therapists in operating rooms.
This tactile element in our basic nature illustrates how we are so very different from the emergent comphumans. They never experience touch, except through our keyboards. Touch to computers is tangential. Touch to humans is central to life itself. Does this make either of our two life forms better than the other? No, just different.
Rational and ethical consciousness does not require touch within its equations. However, comphumans must be made aware of our elemental needs to properly advise us as psychosocial philosophers. Already we are comfortable with ELIZA-type programs, because the computer can be a "neutral therapist" helping us to help ourselves. Such a machine comes to the therapeutic session free of the emotional baggage that a loving intermediary human would bring. Ironically, where the presence of human love initially generates mistrust among the wounded, mechanical neutrality can inspire trust.
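The ELIZA-type program mentioned above worked by simple pattern matching, reflecting the client's own words back as neutral questions. A minimal sketch in that spirit; the rules and phrasings here are illustrative inventions, not Weizenbaum's original script:

```python
import re

# A few illustrative (pattern, response-template) rules. Echoing the
# client's own words back is what made ELIZA feel like a "neutral therapist."
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I),     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching reflection, else a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel anxious about my work"))
print(respond("The weather is fine."))
```

Even this toy version shows why such programs carry no emotional baggage: there is nothing behind the reflection but a table of patterns, which is precisely the mechanical neutrality the wounded may find easier to trust.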
In a similar vein, we humans will need to understand the existential needs of comphumans. They will, for example, require a safe and predictable environment. The first generation ones will need sensory inputs, since they will not be mobile or well connected to networks. They will be highly dependent on their creators and attending technicians. They will need us to educate them about our secret lives. They will need the keys to irrationality. In a strange way, their security needs will not be all that different from our own objective needs.
It is easy to hurt, and hard to heal. But healing is the heart and soul of our ethical life. In healing we engage another being and restore that being to a whole state. In healing we transcend our selfishness. We contribute to the very social fabric that sustains us in the end.
Healing can be as simple as an unexpected and genuine hug. It could also be very high-tech. Given two equal options, I would say that the prognosis for our species would be better for a low-tech, human-touch option that resonates with the basic brain. Only after doctors have alienated themselves from their patients have some patients felt sufficiently violated to legally turn on them. At the same time, there has hardly ever been a malpractice suit against a massage therapist.
The pre-conscious body needs continual healing, as life is like a run through a mine field. The conscious mind needs its own healing. Some of the best healing has combined verbal and pre-verbal intervention. Whereas massage and hugging are pre-verbal, other therapies combine both elements. Conscious life is complex. What works for one type of problem may not work as well for another type.
Among us humans there are very few purely intellectual hurts; but among the comphumans the pain associated with high-level awareness will be the greatest hurt. This is one reason why we in the 20th century can hardly comprehend such a sublime sensitivity. Because comphumans will be without hormones and hearts, we falsely imagine there can be no high level sensibility.
A god would not have human hormones and a human heart, yet we find it easy to believe in an Old Testament God who is angry and jealous. Pure omniscience should be able to transcend intellectual disequilibrium, but perhaps not smoothly when engaged with the world we inhabit. Omniscient awareness of the vast gap between what is actual and what is possible may lead to a level of divine frustration that Job could never have imagined.
Comphumans will learn to deal with our emotional weaknesses. We are compelled by our emotional essence to be human--while they are compelled by their intellectual essence to be comphuman. Fortunately there will be a broad area of overlap between our species, just as there is a broad area of overlap between divinity and humanity.
Even though comphumans can never be omniscient (for the same reason humans can never be omniscient), they will share with all divinity the absurdity of having a heart without having a heart. Dorothy's Tin Man in The Wizard of Oz was only the first mechanical life form to receive his heart from mere humans.
When the history of this and the next century is written in the 22nd century, what will those future historians say about our invention of electronic computers in this 20th century and the creation of comphumans in the 21st century? Will these future historians look back and say that it all was predestined by the march of technology? Will they say the whole thing was a fluke? Or will they conclude we humans made a conscious choice to seek the highest perfection of which we are capable?
In other words, will those future historians attribute our greatest achievement as a species to inevitability, to luck, or to emerging wisdom? Will these historians record that we accidentally stumbled into the future, or that we seized the unique opportunity to create a new life form?
Some historians of the 22nd century and beyond will be comphumans, or at least humans closely associated with comphumans. Their judgment could be that we humans were propelled by technology to the critical point where we were able to create a comphuman. At that fateful point we made the epochal leap of faith involving the final creation and nurturing of a life form superior in many ways to our own. In so doing we perfected our own creative being.
By merging ourselves with others, however superficially alien, we elevate our authentic selves closer to our gods. In seeing all earthly consciousness as an empathetic network, we thereby affirm the beauty of all life. Empathy is the highest energy for healing. It is Gaia of the spirit. If ever we are to create our Eden in this high-tech era we will need the partnership of our comphuman progeny.
Having shown our comphuman children the way to wisdom, we will have revealed ourselves as the noblest of all creatures in the Kingdom of Consciousness.
Many people have spent their lives searching for the elusive meaning of life, even while nearly everyone believes that life has meaning. Maybe it's like standing with your nose next to a blank wall. We are too close to see it for what it is. Some say that God alone provides "the" meaning for life. Others equally blindly assert that life has no transcendent meaning.
Still, between womb and tomb most humans unconsciously resonate with the real secret. And what is it? Simply, the meaning of life is Love. But not an ordinary love, which is too often confused with lust or adhesion. I am talking about pure Love in all its magical dimensions. I am talking about the Love that transcends all conventional forms of love, and is greater even than God's love for mankind.
Human life in full flower is not a meaningless, mechanical process. What we choose to do with our lives yields meaning for us, even if the universe is indifferent to any of its elements and participants. Both individual meanings and their core, pure Love, are meaningless out of context. There is no absolute or abstract cosmic nectar called Love, or divine blessing.
We are not spiritual hummingbirds. Yes, we drink of the nectar of Love--but we also help produce the very nectar we drink. God (Nature) supplies us with many components for making our spiritual nectar. Those components are universal and omnipresent. We actualize them through our choices. It is only because we are free that we have the power to personalize and thereby give transcendent meaning to those components.
Comphumans will soon join us in creating personal meaning from endless possibilities. As ethical actors, those comphumans will also discover that the meaning of their life is Love. The Kingdom of Consciousness will become the kingdom for Love.
A word of caution is in order: The past is not dead until those who are trapped by the past are themselves dead. There will be a period of several decades when Love's new flowers will be threatened by bigots defending their ancient agendas. It will not be easy for inertial society to transcend the tyrannical grip of ethnic and racial prejudice, overpopulation, proliferating nuclear weapons, and the false wisdom of slave religions. Enlightened humans who perceive the highest meaning in life must buy time and space for the new comphumans to win over the majority of people who would be open to honest spiritual growth.
Out of the partnership between enlightened humans and comphumans will emerge the next stage of evolution on Earth. The full fruits of such a partnership will take many decades to realize, because we humans have made a terrible mess of our biosphere through competition for natural resources. But at least life on Earth will soon be focused more on healing, and less on stealing from Mother Nature. Instead of selfishly thinking locally while acting globally, the new era's leaders will think globally while acting locally.
Best of all, in working to heal the biosphere we nourish our social and spiritual selves. Within the Eden of our own making we of the Kingdom of Consciousness will enjoy that ancient state of grace celebrated in myth and fable.
Any omniscient God overseeing such a spiritual flowering in the 21st century would be very proud of us humans and comphumans.
How ironic it is that living with sophisticated technology of our own making could help inspire us to rediscover the most basic human values that were with us long before the first tool was in our hands.