Monday, April 28, 2008

In the Cleft of the Rock

As I mentioned earlier, I was a devout Christian when I started programming BrainWorks, and now that I've finished, I am an agnostic with no particular religious leaning. My work in artificial intelligence was not the only reason I gave up my religious faith, but it is part of the reason. Moreover, it is a story worth telling, and I hope it will help both Christians and non-Christians gain a greater understanding of each other, something that is sorely lacking in this world where rational justification often takes a back seat to dogmatic conclusions.

To understand the process of giving up my faith, however, you must understand the faith I had. Many people who call themselves Christians, perhaps most, don't have much to do with the actual tenets of the religion. To the average Christian, being a Christian means that God and Jesus love you, and Jesus died for your sins so that when you die, you go to Heaven instead of Hell. In other words, Christianity is a nice feeling in your heart and your insurance policy for when you die. It has little impact on your actual life, except for a general imperative to "do good things", which many Christians ignore, or act as if it only applies to other Christians.

I was never this kind of Christian.

My core belief has always been in causality, and by extension rationality. I have always believed that if something is true, you can trust in its consequences to be true as well. So I believed in the core tenet of Christianity, which is:
Jesus died to pay for my sins and rose from the dead so that I could receive God's spirit and thereby have a loving, personal relationship with God in this life, and be with God in heaven after I die.
And I believed everything that logically follows if this is true. So yes, I believed that God can and does do miracles, even in this day and age. Real miracles too, not "I saw the face of Jesus in my pop tart". Stuff like "I was born blind and now I can see". I believed that people can and did hear from God, and that if you pray to God, he hears you as well.

Note that this is fundamentally different from saying that the Bible is the 100% true, infallible word of God. There are some clear logical inconsistencies, and I just wrote those off as "I guess they didn't hear correctly from God about that part", or "someone corrupted this text for their own purposes". The most glaring example is the book of Daniel, which purports to be written during the Babylonian captivity. However, the book contains words whose linguistic roots trace back to the Persian empire, a culture the Israelites didn't encounter until well after Daniel died.

There were also some religious statements that don't logically make sense. The traditional explanation for rules like "don't eat pork" is that they applied to older cultures for some reason but don't apply to the culture of today. However, I took the stance that maybe people just didn't hear correctly from God, and the Bible's authors recorded the wrong thing.

For example, even as a Christian I did not think homosexuality was immoral. There is no logical reason that consenting sex between people of the same gender would be wrong but between opposite genders would be okay. "God hates it for some reason" isn't a good argument. According to the book of Jeremiah (Jeremiah 7:19), things are wrong because they are detrimental to humans. There is certainly nothing humans can do to injure God! Our actions can only harm humans, so if an action does not hurt anyone, it cannot be immoral. Consensual sex falls into this category regardless of gender.

So as a Christian I had no problem admitting that the Bible was flawed and even wrong about some things. I disagreed with some very popular Christian opinions, and not just regarding homosexuality. But I still considered myself a Christian, because you can still believe the Bible is wrong about some things (bacon, homosexuality) and right about others (Jesus died so God can have a relationship with me). I still believed that I could hear from God, talk to God, and witness the supernatural miracles of God.

But around one year ago everything changed, for two very different reasons. One reason convinced the pragmatist in me and the other convinced the idealist, and the two arguments together forced me to admit that I had been very, very wrong for the past 30 years of my life.

Please believe me when I say that I was not looking for a reason to leave my religious faith. As a Christian, I felt totally in love with God. To this day, I have never felt more joy in my life than during a Sunday worship service, singing hymns and praises. Being confronted with the irrefutable reality that I was wrong about God was agonizing and heart wrenching, and met with many tears. For months I felt devastated, and I still wonder if I will ever find something that brings me as much joy. But I had no other choice. I do not worship God; I worship truth, and by extension reality. If the truth is that there is no God, or at least no Christian God, then my only option is to act on that truth.

I am confident that most atheists and agnostics do not have the faintest idea how much safety and security a devout Christian gets from their religion. From the agnostic point of view, it's easy to think of Christians as weak minded for taking so much on faith and so little on reason. If a Christian doesn't change their mind when presented with solid reasoning, that conclusion seems natural. But it's difficult to comprehend the amount of mental and emotional security that a religion brings, even a false one. The sheer fear of being wrong is enough to make most religious people flat out ignore solid arguments to the contrary.

I'm glad I took the red pill, but the blue pill would undoubtedly have been less painful. It is my hope that godless people would have compassion on those who still have religion, and understand that even though many may fear they are wrong, they lack the emotional strength to take the large steps that seem easy for us.

Monday, April 21, 2008

Parallel Evolution

There is a biological concept called parallel evolution, which refers to the phenomenon by which the same trait evolves in the same manner in two different species. For example, all eyes trace back to the same base creature, but marine invertebrates and vertebrates eventually evolved the same set of eye advancements, despite lacking a common ancestor that had all those advancements. There are also cultural examples of parallel evolution for concepts like writing and the wheel. In each case, these concepts fit three basic criteria:
  • Very useful when obtained
  • Not completely obvious
  • Little to no room for implementation differences
The key point is the last one. There's essentially just one way to make a working wheel, and one way for writing to work, so in that respect it's not surprising that the ancestors of the Mayans and the ancestors of the Chinese independently came up with the same concept of putting symbols on surfaces to record ideas. (And yes, the wheel is not a completely obvious concept, because it also requires the design of an axle.)

So I often wonder what an alien life form would be like, evolved totally independently of life on Earth. What concepts are there about life that really only have one good solution? For example, the structure of the carbon atom makes it highly likely that all life would be carbon based. Silicon based life is theoretically possible, since silicon resembles carbon in being able to form four chemical bonds, but silicon is much heavier, so carbon is by far the most likely basis. Also, water is a fascinating molecule in liquid form, used for a variety of purposes in life, so it would be very surprising to see a life form that did not use water.

All of life exists somewhere in the density spectrum between water (liquid) and carbon (solid). Some life forms are closer to the water side, like a jellyfish, and some are closer to solids, like trees. Land based animals ("fleshlings") are in the middle, and seem to offer a good mixture of the mobility of a jellyfish and the protective structure of a tree. So it seems likely that any sentient life form will be made of some kind of flesh.

So while we can guess about the physical form of another life form, deducing the nature of an alien brain is much harder. That's primarily because we don't know enough about the human brain to guess which features only have one simple solution. After all, we've only encountered one kind of sentient life: ourselves. Would aliens need to sleep? Our research into sleep implies our brains use sleep to process memories, similar to defragmenting a hard disk. Important memories are stored, unimportant ones are discarded. But is our brain's implementation of memory the only way to solve this problem?

Well, computers solve the storage of data-- thoughts and memories-- through completely different means. Human thoughts are fluid. People can easily forget things that happened five seconds ago, or even remember the details incorrectly. Even photographic memory isn't perfect, and seems to decay with age. Short of hardware errors, however, a computer's memory management is perfect. All the data can be retrieved without error. The human brain seems to sacrifice accuracy for much greater storage capacity, access speed, and perhaps processing speed. It's possible there exists an alien life form that has found a solution to the memory problem that combines human memory capacity with computer memory accuracy, but I doubt it.

So this line of thought makes me think two things about the field of AI research. The first is that it's crucial to identify the portions of intelligence that only have one good solution and solve them first. For video game AI, these are problems like navigation and visualization. At this point everyone knows the navigation solution is A*, but there's still the question of how the mind identifies the potential waypoints A* requires. For general AI, the most fundamental problem is language processing, which is devilishly difficult and could take centuries to solve. But having a library of known solutions to basic AI problems will accelerate the ability to create good AI. This is similar to how an operating system abstracts away simple solutions to problems a programmer doesn't want to worry about very much, such as processor scheduling and disk access.
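
To make the navigation point concrete, here is a minimal sketch of A* on a small grid. This is illustrative C with hypothetical names, not actual BrainWorks code, and it uses a linear scan where a real implementation would use a priority queue:

/* Minimal grid-based A* sketch -- illustrative only; the names
 * (Point, heuristic, astar) are hypothetical, not from BrainWorks. */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define WIDTH  8
#define HEIGHT 8

typedef struct { int x, y; } Point;

/* 0 = walkable, 1 = wall */
static const int grid[HEIGHT][WIDTH] = {
    {0,0,0,0,0,0,0,0},
    {0,1,1,1,1,1,1,0},
    {0,0,0,0,0,0,1,0},
    {1,1,1,1,1,0,1,0},
    {0,0,0,0,1,0,1,0},
    {0,1,1,0,0,0,1,0},
    {0,1,0,0,1,1,1,0},
    {0,0,0,0,0,0,0,0},
};

/* Manhattan distance: admissible for 4-way movement */
static int heuristic(Point a, Point b) {
    return abs(a.x - b.x) + abs(a.y - b.y);
}

/* Returns the cost of the cheapest path, or -1 if none exists */
int astar(Point start, Point goal) {
    int g[HEIGHT][WIDTH];               /* best known cost from start */
    int closed[HEIGHT][WIDTH] = {{0}};
    int x, y;

    for (y = 0; y < HEIGHT; y++)
        for (x = 0; x < WIDTH; x++)
            g[y][x] = INT_MAX;          /* INT_MAX marks "not reached" */
    g[start.y][start.x] = 0;

    for (;;) {
        /* Pick the open node with the lowest f = g + h */
        Point best = {-1, -1};
        int bestF = INT_MAX;
        for (y = 0; y < HEIGHT; y++) {
            for (x = 0; x < WIDTH; x++) {
                if (g[y][x] == INT_MAX || closed[y][x])
                    continue;
                Point p = {x, y};
                int f = g[y][x] + heuristic(p, goal);
                if (f < bestF) { bestF = f; best = p; }
            }
        }
        if (best.x < 0)
            return -1;                  /* open set empty: no path */
        if (best.x == goal.x && best.y == goal.y)
            return g[best.y][best.x];
        closed[best.y][best.x] = 1;

        /* Relax the four neighbors */
        const int dx[4] = {1, -1, 0, 0};
        const int dy[4] = {0, 0, 1, -1};
        int i;
        for (i = 0; i < 4; i++) {
            int nx = best.x + dx[i], ny = best.y + dy[i];
            if (nx < 0 || nx >= WIDTH || ny < 0 || ny >= HEIGHT)
                continue;
            if (grid[ny][nx] || closed[ny][nx])
                continue;
            if (g[best.y][best.x] + 1 < g[ny][nx])
                g[ny][nx] = g[best.y][best.x] + 1;
        }
    }
}

int main(void) {
    Point start = {0, 0}, goal = {7, 7};
    printf("path cost: %d\n", astar(start, goal));
    return 0;
}

Notice that the waypoint identification problem is everything this sketch takes for granted: deciding what the nodes and edges of the search graph should be in the first place.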

The second thought I have is this nagging feeling that we may be designing AI on a fundamentally flawed hardware platform. The computer is excellent when what you want is ultimate precision. Designing AI on a computer involves writing sophisticated algorithms to artificially create the "randomness" that real life seems to incorporate. Perhaps one reason artificial intelligence is so hard is that sentient life requires a system of memories that trades precision for increased data capacity and faster access time the way a human brain does, and we'll never create satisfying AI until we start programming on that kind of hardware platform. I don't know if that hardware platform is still based on transistors or if it's something more like DNA, but deep down I feel like designing AI on a computer is pushing a square peg into a round hole.

Monday, April 14, 2008

BrainWorks 1.0.1 Released

I've just released the latest update to BrainWorks, implementing the fixes I talked about a few weeks ago. Basically there's a much more sophisticated algorithm for tracking and estimating how likely a bot is to shoot a weapon in a given situation. You can download it using the links on the right if you're interested in trying it out. Let me know what you think!

Related to this, I've been thinking about the question of when software is done. In some sense I still stand by my answer in The Attitude of the Knife, which is "when you say it is". The flip side is that as long as you still have ideas and commitment, there's always room for continual improvement. For a simple program meant to serve a specific purpose, such as reading your email, there comes a point where there isn't much more room for feature development. Artificial intelligence isn't like that though. To this day, researchers are still trying to figure out how different areas of the human brain work. And contrary to popular opinion, humans are not the end of evolution. The human brain continues to refine and advance through the generations.

I believe the line dividing things that can be finished and things that cannot is the line of self reference. When the problem is best solved by something that can analyze and correct its own mistakes, a whole new field of issues applies. For an in depth explanation of why this is, I highly recommend the second of three books that influenced my mental framework for understanding the world. That book is Gödel, Escher, Bach: An Eternal Golden Braid, and it is about the nature of intelligence. Very roughly paraphrased, the book talks about the mathematical theorem known as Gödel's Incompleteness Theorem, which says that any formal system powerful enough to describe itself contains statements that are true but unprovable within the system. Originally the theorem was discovered in attempts to work out some issues in Principia Mathematica, an attempt to derive all mathematical truths from first principles. However, the incompleteness theorem sheds unanticipated light on philosophy, and by extension on the nature of thought and intelligence.
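
Stated a bit more formally (my paraphrase of the first incompleteness theorem, not Hofstadter's exact wording): for any consistent formal system $F$ that is effectively axiomatized and strong enough to express basic arithmetic, there is a sentence $G_F$ such that

$$F \nvdash G_F \quad \text{and} \quad F \nvdash \lnot G_F,$$

even though $G_F$ is true in the standard model of arithmetic.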

Viewed in the context of intelligence, you could conclude that there are things which an intelligent person would do, but no describable algorithm can determine what those things are. Perhaps this represents acts of creativity, intuition and insight. Or perhaps those things are describable, but other things are not.

Applied to Artificial Intelligence, it means that there are some aspects of AI that we cannot solve; we can only approximate. And there's always room for better approximations. This is the real reason that AI development can last forever. You're never really done. At least not until you say you are.

Monday, April 7, 2008

Getting it "Just Right"

There's not much difference between driving 55 and 56. Unless of course the speed limit is 55, in which case that small amount of speed could get you a speeding ticket. Outside temperatures typically range between 0 and 100 degrees Fahrenheit, but people easily notice the difference between 68 and 72 degrees. And yet for other measurements in our lives, small changes go completely unnoticed. A car with a full gas tank drives just as well as a car with a quarter tank left-- you only notice the problem when the last drop of gas is gone.

When you design artificial intelligence, everything needs to be broken down to numbers, since a computer can only understand numbers. And a lot of the time you have no idea what numbers best simulate the behavior you want to elicit from the AI, or even whether a given number encodes the correct concept.

Think of the problem this way. Suppose I have a value that encodes the maximum error a bot can make while attempting to aim. If the AI doesn't behave properly with the initial value I selected, I'll want to try other values. By trying different values, I'll rapidly find out whether this number is very sensitive to changes (like temperature) or insensitive (like a gas tank). But if the bot doesn't seem to be doing the right thing, there's still no way to know what the right value is.

A sensitive number is extremely hard to tweak, because you won't see the right behavior until you get things "just right". If the value doesn't encode the right concept, then that "just right" state won't exist, but you'll never know that. You'll just see how all kinds of different values fail in different ways.

And an insensitive number generally just has an impact when it crosses an important boundary (for example, driving 56 instead of 55, or your gas tank being 0% full instead of 1% full). There's often no indication where this interesting numerical boundary might be.

Additionally, values can depend on each other. Perhaps changing this maximum error value makes some other value sensitive when it was insensitive before (and thus never aggressively tweaked).
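
As an illustration, here's a minimal sketch of the kind of by-hand sweep I mean. Everything in it is made up (the parameter, the toy accuracy curve, the function names), not actual BrainWorks code; in reality the "match" would be a real test game and the metric a measured hit rate:

#include <stdio.h>

/* Toy stand-in for running a test match: maps a candidate
 * max-aim-error value to a measured bot accuracy.  In reality this
 * would launch a match and tally hits; here it's a made-up curve. */
static float run_match(float max_aim_error) {
    return 1.0f / (1.0f + max_aim_error * max_aim_error);
}

int main(void) {
    float value;
    for (value = 0.5f; value <= 4.01f; value += 0.5f) {
        float accuracy = run_match(value);
        printf("max_aim_error=%.1f -> accuracy=%.3f\n", value, accuracy);
    }
    /* Big jumps between adjacent rows suggest a sensitive value (like
     * temperature); a nearly flat column suggests an insensitive one
     * (like the gas tank), whose effect is hiding at some boundary
     * the sweep hasn't crossed yet. */
    return 0;
}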

All this to say that when you write AI, there's an awful lot of number tweaking going on. You might do it with simulations, neural network training, genetic algorithms, or by hand. But one way or another, you'll have dozens of numbers that all need to be "just right" to present the illusion of intelligence. The good news is that once you've spent time finding these values, you'll have excellent insight into how humans approach the problem. That's how research works, I guess.

For example, do you know what value is the single biggest factor in determining a player's aiming accuracy? It's how fast the player moves the mouse, not the error in mouse movement. The most important factor is the maximum allowed acceleration of the bot's simulated mouse. I know this because of all the values that go into aiming (and there's close to a dozen of them), the acceleration factor by far has the largest impact on actual bot accuracy. There are some very good reasons why that is the case, which are a bit too complicated to explain right now. But simply change the maximum acceleration for high skill bots from 1400 to 1600 and you'll see an immediate increase in actual bot performance. This value was one of the last values I decreased before release, actually, so BrainWorks bots wouldn't have the same ungodly aim that traditional Quake 3 bots have.
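
If you're curious what "maximum acceleration of a simulated mouse" means mechanically, here's a highly simplified one-axis sketch with made-up names and numbers. It shows the general idea rather than the actual BrainWorks implementation:

/* One-axis sketch of acceleration-limited aiming -- hypothetical
 * names and constants, not real BrainWorks code.  The bot's view
 * angle chases the target, but its turn speed can only change by
 * MAX_ACCEL degrees/second each second, so a sudden target jump
 * takes real time to track. */
#include <stdio.h>

#define MAX_ACCEL     1400.0f  /* deg/sec^2: the value discussed above */
#define REACTION_TIME 0.25f    /* seconds to close the remaining gap */

static void aim_update(float *angle, float *velocity, float target, float dt)
{
    /* Turn speed we'd like: close the gap over REACTION_TIME seconds */
    float desired_vel = (target - *angle) / REACTION_TIME;
    float accel = (desired_vel - *velocity) / dt;

    /* Clamp to the maximum allowed "mouse" acceleration */
    if (accel >  MAX_ACCEL) accel =  MAX_ACCEL;
    if (accel < -MAX_ACCEL) accel = -MAX_ACCEL;

    *velocity += accel * dt;
    *angle    += *velocity * dt;
}

int main(void)
{
    float angle = 0.0f, velocity = 0.0f;
    float target = 90.0f;  /* a target suddenly appears 90 degrees away */
    int frame;

    /* 20 frames at 50ms each: the aim lags at first, then closes in */
    for (frame = 1; frame <= 20; frame++) {
        aim_update(&angle, &velocity, target, 0.05f);
        printf("t=%.2fs  angle=%5.1f\n", frame * 0.05f, angle);
    }
    return 0;
}

Because the turn speed can only change by MAX_ACCEL degrees per second each second, acquiring a target that suddenly appears far away takes measurable time, and raising the cap shrinks that time. That's why this one value dominates measured accuracy.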

At any rate, getting artificial intelligence to feel "just right" really comes down to finding the appropriate things to measure and the values those measurements should have. There's no right way to do this, and certainly no magic solution. BrainWorks uses statistical modeling for some things (weapon accuracy) and fixed numbers derived from simulations for others (aiming). Seeing how much the final result can change from just minor alterations in one number certainly gives an appreciation for the complexity that goes into genuine intelligence. If changing just one of twelve values has a dramatic impact on how skilled a bot appears, try to imagine the complexity of millions of neurons in your brain. And yet they all operate together in a (mostly) cohesive unit. It boggles the mind.