
Global To Do List (Next 1000 Years)

Ridley Scott's Blade Runner -- let's not go there.

In 1000 AD, human civilization was led by the Golden Age of Islam (with extensive trade routes, massive cities, and polymath philosopher-scientists like Alhazen) and the 100-million-strong Song Dynasty of China (with such inventions as gunpowder, paper money, and movable-type printing). Vikings raided feudal Europe, Mississippian culture thrived in North America, and the Aztecs had just moved to what is now Mexico. Drought and environmental collapse had recently led to the downfall of the Maya. Just like today, the world had its bright spots and disaster areas, and plenty of places where people just muddled along as usual.

Diagram of a hydropowered water-raising machine from The Book of Knowledge of Ingenious Mechanical Devices by Al-Jazari in 1206.

Unlike today, the world’s 300 million inhabitants did not enjoy the quality of life many of us experience via sanitation, mass production, the combustion engine, electricity, the internet, modern chemistry, materials science, telecommunications and imaging satellites, advanced optics, literature, recorded music, etc.  Even the brightest oracles of 1000 AD could not have predicted half the miracles we experience as part of daily life.  Looking forward to the year 3010, there are no doubt hundreds of technologies and planetary events (and disasters) beyond what we have imagined.  Still, nothing is stopping us from considering what we, as human beings, should try to do within the next 1000 years.  This is the third and final post in this thought experiment; if you like, you can also read the 10-year and 100-year lists.  As I’ve mentioned before, I don’t consider myself a futurist or an expert in any way — I just like to make lists and consider the big picture.

The Singularity Already Happened – Part II

The Singularity -- The Rapture for tech nerds?

In Part I of this post I challenged the idea of Vernor Vinge’s Singularity.  I also promised a response from Vinge himself.  While he hasn’t yet responded to my email inquiry, he did write a brilliant follow-up essay, in 2008, entitled What If the Singularity Does NOT Happen?  The article features dramatic section headings such as The Age of Failed Dreams, A Return to MADness, and How to deal with the deadliest uncertainties?  Recommended reading for anyone who likes to speculate about the future.

Rise of artificial malevolence.

In Part I, I attacked Vinge’s premises from the original 1993 paper, all of which center on the inevitability and exponential acceleration of technological progress along various vectors (computer/brain interface, artificial intelligence, biological intelligence enhancement, and the possibility of “emergent” intelligence from very large networks such as the internet).  I argued that improvements in the computer/brain interface will offer limited gains (because brains and computers actually aren’t that much alike).  I pointed out that A.I. progress has stalled out, at least in terms of “general intelligence” (some specialized information processing applications are moving forward).  I also floated the idea that the kind of “superintelligence” futurists and science-fiction writers like to speculate about (mega-minds that operate on entirely different levels, levels our own puny minds can’t even conceive of) might be a fantasy.  The cognitive space might be a closed space, like the periodic table of elements (we probably don’t know all of the elements, but we know most of them).  Lastly, I presented the blasphemous postulate that raw intelligence might be overrated.  Major upgrades in civilization are probably more a result of other human qualities, like persistence, ambition, stubbornness, fantastical imagination, and curiosity (though the last two are arguably subsets of intelligence).

But what if I’m wrong?  In what shape would my wrongness most likely manifest?

I think there is a real possibility that The Singularity will happen, but it won’t radically change life for corporeal human beings.  Instead, an additional layer of reality will operate “on top” of life-as-we-know-it.  In order for this scenario to unfold, the following preconditions would need to exist:

  • significant advances in quantum computing, and a continuation of Moore’s Law for at least another decade or two
  • successful simulation/modeling/reverse engineering of the human brain (and entire nervous system) to the extent that human consciousness can exist/operate on an artificial substrate
  • the creation of truly immersive virtual worlds (Star Trek holodeck level and beyond)
  • migration technology; bioware systems that allow consciousness and identity to gradually shift from wetware systems (body-brains) to hardware systems (quantum computers)
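The first precondition can be given rough numbers.  Here is a back-of-the-envelope sketch in Python, reading Moore’s Law as the conventional doubling of transistor density roughly every two years (the two-year period is the usual rule-of-thumb figure, not anything from Vinge or Kurzweil):

```python
# Rough arithmetic behind the first precondition: Moore's Law read as a
# doubling of transistor density roughly every two years. "Another decade
# or two" of that compounds to a 30x-1000x increase in raw capacity.

def moores_law_factor(years, doubling_period=2):
    """How many times denser chips get after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(10))  # one more decade: 32x
print(moores_law_factor(20))  # two more decades: 1024x
```

Whether brain-scale simulation needs a 30x or a 1000x improvement is anyone’s guess, which is why the precondition spans "a decade or two."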

The blue elves are here to stay.

Full virtualization of brains, bodies, and worlds — this is one facet of Kurzweil’s Singularity scenario.  To me this scenario seems much more likely than the rise of superintelligent (and possibly malevolent) machines.

Imagine dozens (or hundreds, or millions) of fully immersive virtual worlds, each with their own flavor, physics, social norms, etc.  Each would have numerous virtual denizens — real people who experience life just like we do.  The worlds would also host a number of part-time characters, people living simultaneous corporeal and virtual lives.  Imagine what kind of new experiences and possibilities could exist within this new layer of reality:

  • extreme longevity/immortality
  • travel via instant teleportation (effective FTL travel)
  • mental manipulation of “matter”
  • the option to share thoughts/perceptions/memories (mind-melding)
  • freedom from disease, disability, and physical suffering
  • full range of options for physical appearance (supermodel?  winged unicorn?  Godzilla?)
  • a “variation explosion” for humanity (not unlike the Cambrian explosion); many new types of humans

The list goes on and on.  Some of these you already see cropping up in existing virtual worlds (World of Warcraft, Second Life, etc.) — what is obviously missing in these cases is virtualized consciousness — players are not really in these worlds — they’re still looking at them from the outside (though imagination goes a long way).

Another Big Dodge

At various times in the history of civilization, human beings have escaped population collapse via technological trickery.  The invention of agriculture and the dawn of the Neolithic age is one example (though it came at a great cost).  The invention of synthetic nitrogen fertilizer (via the Haber process) is another.

Without Haber–Bosch, you probably wouldn't exist.

The gradual migration of humanity into virtual worlds might be our next “big dodge.”  There is a limit to the number of human beings our planet can comfortably support, and there is evidence that we are getting uncomfortably close to that limit.  We have thinned the ozone layer, triggered a process of global warming, damaged the oceans, destroyed most of the world’s forests, polluted the air and waterways, and so on and so forth.  We risk collapse of natural and food-producing systems.  How are we going to solve this problem?

It’s possible that humanity will clean up its collective act, provide better stewardship of the planet’s natural resources, and avert disaster.  It’s also possible that human population will naturally decline due to socio-cultural factors (education of women, increased literacy, reduction of world poverty, access to birth control, increased economic burden of child-raising, etc.).  But what if population keeps growing and environmental stewardship doesn’t radically improve?  We’ll have a big problem.

Will the Singularity be the next big dodge?  Perhaps virtual population will increase into the hundreds of billions, or trillions, while corporeal population gradually declines to one or two billion, or perhaps stabilizes around six or seven.  An ever-growing virtual population is one way we could continue economic growth (virtual people would still produce and consume “goods” and services) without negatively impacting the planet (digging up metals, cutting down trees, sucking up oil, etc.).

The Singularity Already Happened

There was a moment in human history, long ago, that irrevocably changed how human beings would experience the world.  This change was momentous; it completely destroyed “life as we know it” for all of humanity.

Early arms race artifacts.

The Singularity I’m referring to happened approximately 45,000 years ago.  It is, of course, the birth of human technological culture.  For tens of thousands of years, anatomically modern humans lived their lives with the same simple tools and the same traditions; culture was frozen.  Something (perhaps trade amongst cultures, perhaps specialization of labor, perhaps a genetic mutation related to imagination) triggered what we now call progress, that disconcertingly rapid cascade of technological and cultural change that makes each generation appear somewhat alien to the next, in its habits and proclivities.

That’s what the birth of a new evolutionary layer looks like.

Multiple Singularities

All reality can be described in terms of complicated desserts.

Reality is a layer cake.  Each layer is dependent on the layer below, and operates within the rules of all lower layers, but also has its own unique set of rules.  For example, biological interactions can be described in terms of chemistry, or even physics, but you won’t really understand what’s going on in the biological realm unless you understand Darwinian evolution.

My hypothesis is that the evolution of the universe can be described in terms of multiple Singularities, with each resulting in the birth of a new evolutionary layer.  The story so far might look something like this:

1st Singularity: Big Bang
Evolutionary layer created: Atomic (stars, gaseous clouds)

2nd Singularity: covalent bonds
Evolutionary layer created: Molecular (planets, solar systems)

3rd Singularity: self-replication of long nucleotide sequences, and/or cell membrane
Evolutionary layer created: Biological (prokaryotic lifeforms)

4th Singularity: development of cell nucleus, and/or cell/tissue specialization
Evolutionary layer created: Somatic (eukaryotic lifeforms, animals with bodies, Darwinian evolution)

5th Singularity: development of complex nervous system, emergence of interiority/emotions/motivation/intelligence
Evolutionary layer created: Social (tribes, families, mating rituals, early culture, primates/hominids, also cetaceans, dogs, etc.)

6th Singularity: tool trading and/or specialization of labor, emergence of technology/cultural progress
Evolutionary layer created: Cultural/technological (memetic evolution)

7th Singularity: virtualization of consciousness?  fully immersive virtual worlds?
Evolutionary layer created: Synthetic intelligence, programmable reality

8th Singularity: ???

The bubbleverses.

Note that each Singularity is local: it happens multiple times, in multiple places (including the Big Bang, if you believe any of the various theories of multiple universes).

What really interests me is this: are there any generalizations we can make about Singularities?  What causes them?  Can they be predicted?  Can they be modeled/simulated?  I’ll discuss my thoughts so far on that subject in a forthcoming post entitled The Game-Changing Algorithm Nobody Is Looking For (Part III — Mutant Nodes).

The Singularity Already Happened – Part I

Buckle your seat belts, here we go.

In 1993, science-fiction writer Vernor Vinge authored a paper introducing and describing the idea of The Singularity, a near-future Rubicon for humanity: we create machines with superhuman intelligence, thus changing everything forever.  In the post-Singularity world, all the old rules are thrown out, progress accelerates exponentially, and the real action shifts away from humanity and towards our cybernetic spawn.  Human beings are relegated to the sidelines as intelligent machines take over the world (or, in darker variations of the scenario, humans are enslaved or exterminated).  In the best-case scenario, super-intelligent, immortal man-machine hybrids peacefully co-exist with the “unaltered” (i.e. regular humans).

Vernor Vinge -- this joker makes up wacky ideas for a living.

Vinge’s paper on The Singularity is clever, thought-provoking, and insightful.  It’s exactly the kind of “how big can you think” speculation a good science fiction writer should come up with.  Unfortunately, some groups of otherwise intelligent people seem to have swallowed Vinge’s paper whole and uncritically, elevating his fevered speculations to a kind of futurism gospel.  Vinge’s paper is loaded with tantalizing specificity; The Singularity will probably occur between 2005 and 2030; it will be preceded by four “means” that we can currently observe unfolding in our technology newsfeeds (biological intelligence enhancement, advancement of computer/human interfaces, large computer networks becoming more intelligent, and the development of machine intelligence and the possibility of machine consciousness).  This specificity gives the paper the feel of prophecy, at least to the unsophisticated reader.  Science-fiction connoisseurs, on the other hand, will see through the purposefully affected serious tone of Vinge’s paper; in fact he is riffing, presenting a range of wild possibilities as if they might actually happen.  That’s what science fiction writers do.

V.C. wunderkind Steve Jurvetson at The Singularity Summit, explaining how The Singularity will involve lots of corporate logos.

The inventor/entrepreneur Ray Kurzweil is particularly fond of the Singularity concept, and has written extensively about the subject in books such as The Age of Spiritual Machines and The Singularity Is Near.  He is also a co-founder of the Singularity University. Recently featured in the New York Times, Singularity University describes its mission as  “to assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges.”  I’m skeptical; Singularity U seems like a really good way to separate rich white male tech nerds (for the most part, anyway) from fifteen thousand dollars, in exchange for nine days of hyperactive white-board scribbling, gallons of free coffee, and a bag of Silicon Valley schwag (including a personal DNA test kit).  Exponential technological progress is going to change everything.  We don’t know how exactly, but there’s going to be a big change and then everything will be different.  It might have something to do with your smart phone, social media, artificial intelligence, anti-aging technologies, space travel, and/or renewable energy!

There’s probably no harm in the existence of Singularity University.  By all accounts the people who run it are idealistic (not hucksters), and the people who take the courses can generally afford it.  But what is it, really?  It’s just more riffing, just like Vinge’s original paper.  The professors at Singularity University aren’t going to bring us any closer to The Singularity, because The Singularity is illusory.

WHY THE SINGULARITY WON’T HAPPEN

Let’s examine some of the premises of Vinge’s original paper, and discuss them in turn.

Premise #1: Improvements in computer/human interfaces will result in superhuman intelligence.

We’ve already had some improvements in computer/human interfaces, and they’ve proved to be fun and convenient.  The mouse is nice, as is the trackpad.  The portable computing device (laptop or smart phone) comes in really handy.  And we can easily imagine an implant that allows us to access the internet via thought alone, or a contact lens micro-screen that projects data over our visual field.

Oh -- that's where they are.

But let’s get real for a second.  Those of us with internet access already have near-instantaneous access to a good chunk of the world’s knowledge, right at our fingertips.  Has it changed us that much?  Instead of arguing about who was in what movie, we just look it up.  Where are the Canary Islands, exactly?  Just look it up.  What’s the four hundredth digit of pi?  Just look it up!

Having access to unlimited knowledge hasn’t changed us that much.  It’s fun, and enormously convenient, but it’s not revolutionary.

Well, what about access to computing power?  Computers can run enormously powerful simulations, and do enormously complex computations in the blink of an eye.  Won’t that make a difference?

Once again, look at how we currently use the enormous amount of computing power available to us, and project forward.  What do we do with it now?  We watch TV on our computers.  We play computer games that accurately represent real-world physics.  Maybe our screen-saver analyzes astronomical data, in search of signals from ET, or folds proteins with the spare cycles, but in neither case do we pay much attention.

Improving the interface between brain and computer isn’t going to make a big difference, because the brain/computer analogy is weak.  They aren’t really the same thing.  We’ve already gone pretty far down the computer/human interface road, with the big result being increased access to entertainment (and porn).

Premise #2: Increases in computer processing speed, network size and activity, and/or developments in artificial intelligence will result in the emergence of superhuman intelligence.

Daniel Dennett has an interesting counter-argument for people who like to speculate about superhuman intelligence by comparing human intelligence to animal intelligence and then extrapolating upward.  The speculation goes something like this: cats can’t do algebra — they can’t even conceive of it — but people can do algebra.  So couldn’t there exist an order of mind that can perform complex operations and computations that human beings can’t even conceive of?  Some kind of super-advanced alien (or future A.I.) mathematics that would befuddle even the Stephen Hawking types?

Dennett points out the problem with that argument: humans possess (we have evolved) a completely different cognitive faculty that cats don’t possess.  We have the ability to think abstractly.  We have the ability to run simulations in our minds and imagine various futures and outcomes (we can run scenarios).  We can think symbolically and manipulate symbols (words, numbers, musical notation, languages of all sorts) in infinite numbers of configurations (why infinite?  because we can also invent new symbols).  In short, human beings can perform abstract mental operations.

Cats have a different relationship with symbols.

This is not to say that cats will never evolve symbolic cognition, or that the human brain has stopped evolving.  But once we possess the imaginative faculty, once we evolve the ability to perform abstract mental operations, once the cat is out of the bag (so to speak) then there can exist no idea that by its very nature is off limits to us.  Sure, some areas are difficult to contemplate.  Quantum mechanics falls into this category.  Quantum mechanics is entirely outside of our range of sensory experience (as human beings).  It’s counter-intuitive; it doesn’t necessarily make sense.  But this doesn’t mean we can’t think about it, and imagine it, and create analogies about it, and perform quantum calculations, and conduct quantum level experiments.  Of course we can.

I believe Dennett makes this argument in Freedom Evolves (but I don’t have it handy to check — it might be in Darwin’s Dangerous Idea).

I’m not saying that humans are the “end of the line” or the “peak of the pyramid.”  It’s possible, even probable, that our descendants (biological or cyborg or virtual) will be smarter than us.  It’s also likely that the future of evolution (and I mean evolution in the broadest sense) holds “level jumps” that will change the very nature of reality (or rather, add layers).  Perhaps our descendants (or another group’s descendants) will be able to manipulate matter with their minds, Akira-style.  Now that would change things up.

Even the polarphant must obey the rules of Darwinian evolution.

My point is that we should question the idea that superhuman intelligence can even exist.  Certainly superhuman something-or-other can exist, but intelligence and consciousness are the wrong vector to examine.  Sure, it’s probable that something out there (either elsewhere in the galaxy, or in the future) is or will be smarter and/or more aware/sophisticated than we are, but I question the idea that an entirely different order of cognition can exist.  The cognitive space is like the chemistry space; there is not an entirely different set of elements somewhere else in the universe (or in the future or in the past).  It’s all chemistry: hydrogen and helium and lithium and so forth.  Same for the quantum physics space: once we have all the quarks and gluons figured out on our end, we can surmise that it’s pretty much the same stuff everywhere.  Same for the biological space — of course not every animal in the universe is going to have a genetic code sequenced out of adenine, cytosine, guanine, and thymine, but I’m guessing the rules of Darwinian evolution are universal.  The same is true for cognition/intelligence/consciousness — it’s a space that includes manipulating abstract symbols, imagining and simulating possible futures, performing calculations, and being aware of one’s own perceptions/thoughts/emotions/identity (meta-awareness or self-consciousness).  Of course you can divide up the cognition/consciousness space into various developmental sub-levels (Ken Wilber is a big fan of this) but I don’t buy the idea that there are vastly different orders of cognition and consciousness that exist somewhere out there, in the realm of all possibility.

A very large truck ... but still a truck.

The other problem with Premise #2 is the idea that making something bigger or faster changes its nature or function.  If you increase the speed of a computer, then it can do what it already does much more quickly.  With the right programming, for example, a computer can explore a logical decision tree and look for a certain outcome; thus computers can be programmed to be extremely good at chess.  A very large network is just that — a big network — it can facilitate communications among billions of people and quasi-intelligent agents (bots, computer viruses, and so forth).  But it doesn’t become something else just because you make it bigger or faster.
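The chess point can be made concrete with a toy example.  Below is a minimal minimax search over the game tree of Nim (players alternately take one or two stones; whoever takes the last stone wins) — the game, function names, and scoring are mine, chosen for brevity, not anything from Vinge.  A faster computer walks exactly this kind of tree sooner, but it is still doing the same thing:

```python
# Minimax on Nim: players alternate taking 1 or 2 stones; whoever takes
# the last stone wins. Scores are from the first player's point of view.
# Speeding up the hardware lets you search deeper/faster -- it doesn't
# change what the search is.

def best_move(stones, maximizing=True):
    """Return (score, move): score is +1 if the first player can force a win."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2):
        if take > stones:
            continue
        score, _ = best_move(stones - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or \
           (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(7)
print(score, move)  # → 1 1 (forced win: take 1, leaving a multiple of 3)
```

The well-known pattern falls out of the search: positions where the stone count is a multiple of 3 are losses for the player to move.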

New functionality does not emerge unless new structures emerge.  In nature, new structures can emerge via the process of evolution.  In the realm of technology, new structures and functions are designed, or they evolve out of systems that are designed.  We’re not going to see spontaneous intelligence (superhuman or not) emerge from the internet unless we turn the internet into a giant evolution simulator.  You could of course argue that it already is, but if so, the evolving agents are funny cat videos and naked lady pictures.  It’s memetic evolution; the funniest or sexiest or most heart-warming videos and pictures and posts thrive (get reposted/replicated) while the more complicated, long-winded posts (like this one) enjoy the anonymity of obscurity.  It’s not the kind of network that is going to spontaneously generate superhuman intelligence.

Only the strongest (lolcat) will survive.
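The memetic selection described above is easy to caricature in code.  Here is a toy replicator simulation in Python: posts "reproduce" (get reshared) in proportion to their catchiness, and catchiness is crudely modeled as brevity.  All names and numbers are invented for illustration:

```python
# Toy memetic evolution: a population of posts resampled each generation
# in proportion to "catchiness" (here, just the inverse of length).
# Replication with selection -- no intelligence required, or produced.
import random

random.seed(42)  # reproducible toy run

# (name, length_in_words): in this toy model, shorter means catchier.
posts = [("lolcat", 5)] * 10 + [("long-winded essay", 3000)] * 10

def catchiness(length):
    return 1.0 / length

for generation in range(20):
    weights = [catchiness(length) for _, length in posts]
    # Each "reshare" copies one post, chosen in proportion to catchiness.
    posts = random.choices(posts, weights=weights, k=len(posts))

counts = {}
for name, _ in posts:
    counts[name] = counts.get(name, 0) + 1
print(counts)  # the lolcats take over; the essays drift toward extinction
```

Nothing in this loop trends toward intelligence; it trends toward whatever replicates, which is the point.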

Premise #3: The emergence of superhuman intellect will result in a radical transformation of the world.

Smart people, rather myopically, tend to take this idea for granted.  Of course super-intelligence will be super important!

Historically, extreme intelligence only amounts to something when it is paired with other human qualities, like ruthless ambition, inventiveness, disciplined practice, or preternatural persistence (Thomas Edison, for example, had all of those qualities).  Look around — don’t we all know someone with a shut-in uncle who got a perfect score on his SATs?  Or an unemployed, weed-dealing neighbor with a PhD in Semiotics?  Intelligence is a nice thing to have, but on its own it’s just a brain burning brightly — until it’s all burned up.

Can you read? Thank Johannes.

When extreme intelligence is paired with motivating factors, the world does get changed.  Gutenberg’s movable type printing press has proved influential, to say the least.  The ambitious work of Thomas Edison and Nikola Tesla gave us cheap, universally available electricity, long-burning light bulbs, and dozens of other important inventions.  Bill Gates, Steve Jobs, The Woz, and many others ushered in the era of personal computers.  Maybe one day we’ll have a particularly ambitious A.I. contribute a new mobile gadget or something.  But FTL travel?  Teleportation?  Singularity-level tech?  I don’t think so.

Look at the A.I. curve.  It’s very different from the processor-speed curve.  The latter goes straight up; the former goes up and down in fits and starts.  The most promising approaches to A.I. are those attempting to reverse engineer the brain, and how the brain learns (an artificial childhood).  Maybe, if those go really well, we’ll get an artificial inventor who will invent cool stuff.  But maybe we’ll get an A.I. that majors in Semiotics, proves unemployable, and deals weed for a living.

This post is getting too long, and I don’t want to completely doom its chances of reproductive success.  I’ll save the rest of my thoughts on this subject for Part II, which will include:

  • When and where the real Singularity happened
  • Why I might be wrong (and in what way)
  • Vernor Vinge’s response

The Game-Changing Algorithm Nobody Is Looking For (Part II — The Traps)

Artist conception of Vladimir Vernadsky's "noösphere"

In my last post I described my take on the venerable idea that reality is composed of cumulative layers, and that the “layer cake” view of reality may give us a framework to consider evolution in a broader context (“extra-biological” evolution, if you will).

The question I posed: can we infer any commonalities regarding how one layer emerges from the previous?  Can we construct an algorithm that describes 1) how the molecular layer emerges from the atomic layer, 2) how the biological layer emerges from the molecular layer, 3) how the somatic layer emerges from the biological layer, and so forth?  And if we have such an algorithm in hand, what can we do with it?  Given a sufficiently powerful computer, can we simulate the entire universe?  Can we predict the next layer, or actually generate it within a simulation?

Let me start by warning of a few traps — traps I’ve fallen into at various times while thinking about the question above.

The first trap is looking for any sort of neatness or directionality when examining the results of evolution.  The “stuff” we find in the universe may be generated by simple mathematical algorithms (watch this video to see what I mean), but the results are generally quite messy.  Everywhere we look we find complexity, exceptions, and irregularities.  For example, in the realm of biology, the core concept of “species” is notoriously hard to define (so much so that there is even something called “The Species Problem”).  So we should be wary of any model that classifies reality into neat, fixed categories (like the pre-Copernican map of the solar system below).

Ptolemaic/geocentric conception of the solar system. Neat, tidy, and wrong.
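The "simple algorithms, messy results" point is easy to demonstrate directly.  Below is a sketch of an elementary cellular automaton — Rule 30, one of the standard textbook examples, which I'm using here as my own illustration: the entire update rule is an eight-entry lookup table, yet the output is notoriously irregular.

```python
# Rule 30: bit i of the number 30 gives the next state for the 3-cell
# neighborhood whose cells spell out i in binary. Eight trivial cases,
# chaotic-looking output.
RULE = 30

def step(cells):
    """One update of the automaton on a ring of cells."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule is about as simple as an algorithm gets, yet the triangle it prints is irregular enough that Rule 30's center column has been used as a pseudo-random number generator.  Neat inputs, messy outputs.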

The same goes for directionality.  It is tempting to look at relatively simple bacteria (which have been around for a rather long time), and then look at relatively complex human beings (who have been around for a rather short time), and conclude that evolution is moving in a direction: from the simple to the complex, or from the stupid to the intelligent.  This idea, as any biologist will tell you, is wrong.  Evolution (biological evolution, at least) moves towards whatever forms are most fit for a given environment.  Evolution actually prefers simplicity in a way (simpler forms are often more efficient, and thus more fit); the only reason complex forms (like people) exist at all is that the environmental niches for simple lifeforms are all filled up.  It’s mighty competitive down there, for bacteria and the like.  Evolution goes in whatever direction it finds success, be it towards simplicity, complexity, stupidity, intelligence, speed, sloth, or whatever.

The second trap (or third, if you want to count neatness and directionality as separate) is taking a human-centric view.  This has been a common trap in the history of scientific and philosophical inquiry.  The geocentric map of the solar system above is one example.  Copernicus (with help from the earlier work of Aristarchus) displaced Earth (and everyone on it) from the center of things, instead putting the sun at the center of the solar system.  Isaac Newton furthered our discomfort and reduced our specialness with his theory of universal gravitation: the same force that makes objects fall to the ground governs the movement of the planets and moons.  Darwin pushed human beings out of the spotlight with his Theory of Evolution: instead of being created in a divine image, human beings evolved from apes.  We’re but one species on one planet.  Modern telescopes push our little planet out ever further; our little solar system isn’t even in the center of the galaxy, and how many galaxies are there?  125 billion, says Hubble?  The denigration and humiliation continue to this day; modern physicists ask us to consider that the term universe may be a misnomer; the thing we consider to be everything may be just another grain of sand on a beach of multiverses.  The more we look at reality, the further from the center of things we find ourselves.

How does the human-centric trap relate to the consideration of extra-biological evolution?  It relates to the question: what is a unit of evolution?  What entity, or agent, is evolving on each layer?  Genes evolve on the biological layer, bodies evolve on the somatic layer, and memes or ideas evolve on the memetic or cultural layer.  But where do people fit in?  On what layer are we evolving?

In short, we don’t have our own layer.  We exist on multiple layers.  At least in the way we think of ourselves, we aren’t replicable units.  On the somatic layer, human bodies can make more human bodies, but even identical bodies don’t make for identical people (as anyone who has known twins can tell you).  We think of ourselves as bodies with personalities; both our cultural and genetic heritage make up our identity.  And of course we also exist on the quantum, atomic, and molecular levels, though most of us don’t commonly think of ourselves that way.

This doesn’t preclude the possibility that, within some future layer, some future version of human beings might become replicable units (if our bodies and personalities were entirely digitized, and living in a virtual world or worlds, perhaps).  But that’s a different question.

The last trap, for lack of a better term, is uni-dimensionality.  An example of this type of thinking is supposing that stars and solar systems are on a different evolutionary layer than molecules, or that a structured community of creatures, like a beehive or a human city, is on a different evolutionary layer than the individual lifeforms.  The Global Brain concept is an example of falling into this trap.

Ken Wilber presents a better option for looking at extra-biological evolution: the Four-Quadrant model.

Ken Wilber's Four Quadrant Model of Just About Everything

Wilber divides reality into four quadrants, along two axes.  The first axis is the individual/collective axis.  Galaxies are the collective form of atoms; planets are the collective form of molecules, and so on.  The second axis is interior/exterior.  Our subjective experience as human beings is the interior form or manifestation of our brain-body as a physical, exterior form.

Wilber has written a great deal about his four quadrant model.  I would recommend reading Sex, Ecology, and Spirituality, as well as the more recent Integral Spirituality.

I think Wilber himself falls into the neatness trap, and possibly the directionality trap, with his four quadrant model, but the multiple quadrant idea is still a good one.  I think Wilber’s second axis (interior/exterior) is something that emerges with complex brains.  Wilber’s model seems to imply that consciousness is a fundamental property of matter, and that brains merely refine it.  As I’ve mentioned before, I’ve switched over to Dennett’s camp with regards to consciousness.

As for directionality, Wilber’s main interest is higher consciousness, so that’s his bias when looking at evolution.  There’s no harm in taking a closer look at the particular evolutionary vector that may be leading towards higher consciousness or intelligence, but I’m more interested in the general algorithm that describes (and can hopefully predict) the emergence of new layers (regardless of whether or not higher consciousness is a result).

So, I’ve promised a lot, haven’t I?  An algorithm (or at least a model that can be easily simulated) that describes the emergence of new evolutionary layers.  My model will not fall into the traps of neatness, directionality, human-centrism, or uni-dimensionality.  Will I deliver?  You’ll have to wait for the next post.

The Game-Changing Algorithm Nobody Is Looking For (Part I — The Question)

An ecology of molecules.

A problem I’ve been thinking about for the last twenty-five years or so (I’m a slow thinker, and it’s a big problem) is how new levels, or layers, of reality are created.

For example, what exactly is the process by which the molecular layer of reality is created from the atomic layer of reality?  How does the genetic or biological realm or layer emerge from the molecular?

We know how these things happened, specifically.  For example we know that atomic elements (like hydrogen, oxygen, and nitrogen) were ejected into space from stars.  Some of these atoms linked to each other with a new type of bond (a covalent bond, where the electron rings overlapped, as opposed to the simpler ionic bond).  In this way the first molecules of the universe, like water and ammonia, were formed.  Thus the molecular layer was born.

We also know, more or less, how the genetic/biological layer was created.  Certain macromolecules — chains of nucleotides, or of amino acids — developed the trick of self-replication: assembling copies of themselves from smaller building blocks.  This led, eventually, to a kind of proto-RNA, and eventually (with the addition of cellular membranes), the first prokaryotic lifeforms.  Hello, biological layer.

Diatoms, tiny eukaryotic lifeforms.

What we don’t know, is the rule-set, or algorithm, for how a new layer of reality is created.  Can the process be abstracted?  Does the jump follow a particular set of consistent rules?  There don’t seem to be very many people even asking these questions.  To me, these questions are incredibly important.  I’ll explain why in a moment.

There is plenty of room for debate regarding what constitutes a new layer.  For example, life on Earth goes on for some time before anything even resembling what we consider to be a body emerges.  So perhaps we can separate the somatic layer of reality from the biological layer.  But what triggers the creation of the somatic layer?  Is it the emergence of a new cell structure, the nucleus, that gives rise to eukaryotic lifeforms?  Or is it the ability of cells to specialize that creates the first true somatic forms (like the famous hydra)?

Each new layer of reality is fully dependent on the lower layers (you can’t have molecules without atoms) but it is also distinct — the new layer offers new types of structures, agents, interactions, rules, spaces, etc.  You can even apply the general principles of evolution (like mutation, selection pressure, fitness criteria, etc.) to each layer of reality in the abstract model we’re constructing.
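The claim that the general principles of evolution apply to any layer can be made concrete with a layer-agnostic sketch.  The interface below (a `fitness` function and a `mutate` function supplied per layer) is my own assumption for illustration, not a published algorithm:

```python
import random

def evolve(population, fitness, mutate, generations=100, survivors=0.5):
    """Layer-agnostic evolutionary loop: score, select, vary.

    `population` is any list of agents (molecules, bodies, memes...);
    `fitness` and `mutate` encode the layer-specific rules.
    """
    for _ in range(generations):
        # Selection pressure: rank by fitness and keep the top fraction.
        ranked = sorted(population, key=fitness, reverse=True)
        keep = ranked[: max(1, int(len(ranked) * survivors))]
        # Mutation: refill the population with varied copies of survivors.
        offspring = [mutate(random.choice(keep)) for _ in range(len(ranked) - len(keep))]
        population = keep + offspring
    return population
```

The same loop works whether the agents are numbers, strings, or simulated organisms; only `fitness` and `mutate` change.  What the loop does *not* capture is exactly the question this post is asking: it operates within one layer, and says nothing about how a new layer emerges.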

But how do we get from one layer to the next?  This question is often ignored.

My cosmological viewpoint.

For example, the Maxis game “Spore,” created by Will Wright, models several layers of reality.  There is a cellular layer, a biological layer, and a cultural/technological layer.  The mechanics of the transitions, however, are glossed over.  What are the overarching rules that apply to all the layers, and how, exactly, do we get from one to the next?

On planet Earth’s evolutionary time-line, things start to get interesting when consciousness emerges (and I realize not everybody thinks that consciousness is an emergent phenomenon; Daniel Dennett won me over to this idea).  What I would call the social layer of reality emerges, with animals, propelled by emotional impulses, interacting sexually, familially, and territorially.

Relatively soon after, big-brained primates learn to think abstractly, plan, and manipulate their environment in complex ways, thus introducing the cultural layer of reality.

Various technological layers follow.  Our current state of reality, the half-cyborgified human operating half in physical reality, half in virtual space, with instantaneous access to all the world’s information, with a just-emerging ability to manipulate its own genome, is most likely not the end of the line.  It’s likely, unless we self-destruct sooner than expected, that new layers of reality will continue to emerge.

But how?  What exactly is happening?  Is there any way to simulate the emergence of a new layer?  Not unless you have a model and an algorithm.

It’s necessary to define, in abstract terms, what exactly constitutes a new layer.  And that’s just the first step.

So why is answering this question important?

1) So we can perform interesting simulations.
With quantum computing, we’ll have an enormous amount of processing power at our disposal.  We’ll be able, potentially, to model evolution itself (not just biological evolution, which we can already model in a fairly sophisticated way, but the multi-layered evolution of the universe itself).  But we’ll need models — algorithms and rule-sets — to plug into the computer.  We need to better understand the multi-tiered nature of reality in order to simulate it.

2) So we can understand extra-biological evolution, beyond the realm of metaphor.
Richard Dawkins introduced the concept of memetics — applying the principles of biological evolution to culture.  Certain memes (words, phrases, melodies, ideas) survive and thrive in memetic space (in our minds and media) because they are more fit than others (fitness in this context being catchiness, replicability, aesthetic value, usefulness, entertainment value, etc.).
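
The memetic-fitness idea above can be rendered as a toy model.  The attributes and weights below are illustrative assumptions of mine, not anything from Dawkins — which is precisely the problem the next paragraph raises: nobody has agreed on how to measure these quantities:

```python
from dataclasses import dataclass

@dataclass
class Meme:
    text: str
    catchiness: float     # how easily it is remembered (0 to 1) -- assumed scale
    replicability: float  # how easily it is retransmitted (0 to 1) -- assumed scale

def meme_fitness(m: Meme) -> float:
    # Toy scoring rule; the weights are arbitrary illustrative choices.
    return 0.6 * m.catchiness + 0.4 * m.replicability

def spread(memes, carrying_capacity=2):
    """Keep only the fittest memes, treating attention as the scarce resource."""
    return sorted(memes, key=meme_fitness, reverse=True)[:carrying_capacity]
```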

It’s a brilliant idea, but the study of memetics is arguably dead.  The field has failed to advance beyond the realm of metaphor.  The most basic question — what is or isn’t a meme — has never been answered to the degree where memetic evolution could even begin to be measured.

By understanding and defining exactly what constitutes a layer of reality, and what constitutes an agent, or unit of evolution, within that layer, we might be able to start looking at extra-biological evolution (evolution in general) as a quantifiable field, and not just a grand analogy.

In my next post, I’ll offer my take on The Answer to Life, the Universe, and Everything (and it won’t be 42).  I’ll present my definition of what defines a level of reality, and put forward one possibility for how we can model the jump from one layer to the next.

