Monday, April 21, 2008

Parallel Evolution

There is a biological concept called parallel evolution, which refers to the phenomenon by which the same trait evolves in the same manner in two different species. For example, all eyes came from the same base creature. But eventually marine and vertebrate animals both evolved the same set of eye advancements, despite lacking a common ancestor that had all those advancements. There are also cultural examples of parallel evolution for concepts like writing and the wheel. In each case, these concepts fit three basic criteria:
  • Very useful when obtained
  • Not completely obvious
  • Little to no room for implementation differences
The key point is the last one. There's just one way to make a working wheel or to make writing work, so in that respect it's not surprising that the ancestors of the Mayans and the ancestors of the Chinese came up with the same concept of putting symbols on surfaces to record ideas. (And yes, the wheel is not a completely obvious concept, because it also requires the design of an axle.)

So I often wonder what an alien life form would be like, evolved totally independently of life on Earth. What concepts about life really have only one good solution? For example, the structure of the carbon atom makes it highly likely that all life would be carbon based. It's theoretically possible to have silicon based life, because carbon and silicon have similar structures (both can form four bonds), although silicon is much heavier; still, carbon is by far the most likely. Also, water is a fascinating molecule in liquid form, used for a variety of purposes in life, so it would be very surprising to see a life form that did not use water.

All of life exists somewhere on the density spectrum between water (liquid) and carbon (solid). Some life forms are closer to the water side, like a jellyfish. And some are closer to solids, like trees. Land based animals ("fleshlings") are in the middle, and seem to offer a good mixture of the mobility of a jellyfish and the protective shell of a tree. So it seems likely that any sentient life form will be made of some kind of flesh.

So while we can guess about the physical form of another life form, deducing the nature of an alien brain is much harder. That's primarily because we don't know enough about the human brain to guess which of its features have only one simple solution. We've only encountered one kind of sentient life: ourselves. Would aliens need to sleep? Our research into sleep implies our brains use sleep to process memories, similar to defragmenting a hard disk. Important memories are stored, unimportant ones are discarded. But is our brain's implementation of memory the only way to solve this problem?

Well, computers solve the storage of data-- thoughts and memories-- through completely different means. Human thoughts are fluid. People can easily forget things that happened five seconds ago, or even remember the details incorrectly. Even photographic memory isn't perfect, and seems to decay with age. Short of hardware errors, however, a computer's memory management is perfect: all the data can be retrieved without error. The human brain seems to sacrifice accuracy for much greater storage capacity, access speed, and perhaps processing speed. It's possible there exists an alien life form that has found a solution to the memory problem that combines human memory capacity with computer memory accuracy, but I doubt it.
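For a rough computing analogue of that trade-off, a probabilistic structure like a Bloom filter gives up exact answers in exchange for a much smaller memory footprint. Here's a minimal sketch in Python; the bit count, hash count, and example strings are arbitrary choices made purely for illustration:

    import hashlib

    class BloomFilter:
        """A tiny Bloom filter: remembers set membership approximately,
        trading exactness for a small, fixed amount of memory."""

        def __init__(self, num_bits=1024, num_hashes=3):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = 0  # A Python int doubles as a simple bit array.

        def _positions(self, item):
            # Derive several bit positions from differently salted hashes.
            for i in range(self.num_hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            # "No" is always correct; "yes" can occasionally be a false positive.
            return all(self.bits & (1 << pos) for pos in self._positions(item))

    memories = BloomFilter()
    memories.add("saw a red car on Tuesday")
    print(memories.might_contain("saw a red car on Tuesday"))  # True
    print(memories.might_contain("saw a blue car on Friday"))  # Almost certainly False

Like the brain's lossy storage, the filter can answer "probably yes" or "definitely no," but it can never reproduce the original data it was shown.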

So this line of thought makes me think two things about the field of AI research. The first is that it's crucial to identify the portions of intelligence that only have one good solution and solve them first. For video game AI, these are problems like navigation and visualization. At this point everyone knows the navigation solution is A*, but there's still the question of how the mind identifies the potential waypoints A* requires. For general AI, the most fundamental problem is language processing, which is devilishly difficult and could take centuries to solve. But having a library of known solutions to basic AI problems will accelerate our ability to create good AI. This is similar to how an operating system abstracts away simple solutions to problems a programmer doesn't want to worry about very much, such as processor scheduling and disk access.
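Roughly, the A* part looks like the sketch below; the waypoint graph, names, coordinates, and costs are made-up numbers purely for illustration, since the open question is where those waypoints come from in the first place:

    import heapq

    def a_star(start, goal, neighbors, cost, heuristic):
        """Minimal A* search: returns a list of waypoints from start to goal,
        or None if the goal is unreachable."""
        frontier = [(heuristic(start, goal), 0.0, start, [start])]
        best_cost = {start: 0.0}
        while frontier:
            _, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt in neighbors(node):
                new_g = g + cost(node, nxt)
                if new_g < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_g
                    f = new_g + heuristic(nxt, goal)
                    heapq.heappush(frontier, (f, new_g, nxt, path + [nxt]))
        return None

    # Hypothetical waypoint graph for a small map (names and costs invented).
    graph = {
        "spawn": {"hall": 1.0, "ledge": 2.5},
        "hall":  {"spawn": 1.0, "armor": 1.5},
        "ledge": {"spawn": 2.5, "armor": 2.5},
        "armor": {"hall": 1.5, "ledge": 2.5},
    }
    coords = {"spawn": (0, 0), "hall": (1, 0), "ledge": (0, 2), "armor": (2, 1)}

    def heuristic(a, b):
        # Straight-line distance: an admissible (never overestimating) guess.
        (ax, ay), (bx, by) = coords[a], coords[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    print(a_star("spawn", "armor",
                 neighbors=lambda n: graph[n],
                 cost=lambda a, b: graph[a][b],
                 heuristic=heuristic))
    # ['spawn', 'hall', 'armor']

The search itself is boilerplate once the graph exists; building a sensible waypoint graph is the part with no canned answer.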

The second thought I have is this nagging feeling that we may be designing AI on a fundamentally flawed hardware platform. The computer is excellent when what you want is ultimate precision. Designing AI on a computer involves writing sophisticated algorithms to artificially create the "randomness" that real life seems to incorporate. Perhaps one reason artificial intelligence is so hard is that sentient life requires a system of memories that trades precision for increased data capacity and faster access time the way a human brain does, and we'll never create satisfying AI until we start programming on that kind of hardware platform. I don't know if that hardware platform would still be based on transistors or if it would be something more like DNA, but deep down I feel like designing AI on a computer is pushing a square peg into a round hole.

4 comments:

Novack said...

If you'll allow me, I think you are oversimplifying one point.

The human brain doesn't just "trade precision for increased data capacity and faster access time".

What the human brain actually does is prioritize the HUGE amount of information that reaches us every nanosecond, with the sole aim of letting itself concentrate on more important things.

So what I want to say is: that precision capacity is not traded but discarded, deprecated. In fact, all that info is actually being processed and classified as "relevant" or "not relevant".

So maybe that excellent precision of a computer is not efficient at all. If we were able to give our senses to a current machine, it would freeze instantly, saturated by the huge bombardment of info, a consequence of its inability to discard information.

What we usually forget in the quest for an ultimate AI is that our human brain is the result of millennia of evolution. That evolution is the ultimate programmer, the one who designed our brain's rules for how, when, and which information can be safely discarded.

Ted Vessenes said...

I've been thinking about this for the past few days. You are correct that the human mind stores data in what can be thought of as a priority queue. But really the mind has an extremely aggressive filter. Well over 70% of incoming sensory data is ignored by the brain. The brain doesn't store data so much as the key aspects of thoughts, and then it fabricates most of the missing details from those few key aspects.

This isn't a typical priority queue, because in most such queues data seldom gets overwritten. For the human mind, most of the data doesn't even make it past the first filter.

So I agree with you. But I still contend that a hardware platform designed to take in lots of data and discard most of it might be a better starting point for AI research.

Novack said...
This comment has been removed by the author.
Novack said...

Glad to see your answer.
"I've been thinking about this for the past few days."
You flatter me!

Indeed, I agree. Most probably, AI efforts are being developed on a flawed hardware platform, since it is, by design, oriented more toward brute-force work (that precision you were talking about) than toward the use of any judgment.

Hence, any later programming efforts will necessarily go against the very nature of these platforms.

Anyway, I think this is all based on the idea of a limited analytic capacity. What would happen with an entity able to receive, process, and store all the information, and still function, is something that might be more appropriate for H. P. Lovecraft to answer ;)