Showing posts with label error correction.

Monday, October 19, 2009

We Can Worry About That Later

As I've been in the middle of purchasing a house, my life has been extremely hectic. The past several weeks have been filled with housing inspections, mortgage applications, and reviewing and signing legal documents. And naturally every detail has someone who wants to renegotiate it. After closing, we'll still have painting, moving, decorating, and furniture purchases to handle. My life is really stressful.

A few days ago my wife and I were reviewing the upcoming task list and for one item I mentioned, "we can worry about that later". I then realized I use that phrase an awful lot, and to me it means "we can handle that later". In other words, to me worrying is synonymous with work. Put negatively, I don't stop worrying until the work has been completed.

It was an excellent moment of self reflection. I simultaneously realized why I'm so driven, intense, productive, and stressed. This life attitude has its benefits, but it's certainly not healthy in the long term. A few weeks ago I wrote about how the secret of a successful marriage is reducing stress. Perhaps it's better to say:

The secret of happiness is reducing stress.

After all, what else is stress but discontent about the possible future? It's a tricky balance though. If you live life in the future, you'll solve all these potential problems but always be tense as a result, never enjoying the present. If you live in the present you'll enjoy it only until something happens that you should have dealt with. How do you focus on the present while building your future? Personally, I feel like I don't balance these constraints very well.

So I've been thinking about different ways to manage stress. Some common things people try include:
  • Eat and/or drink
  • Have Sex
  • Exercise or spend time outside
  • Sleep or practice deep breathing
  • Read a book, watch a movie, or play a game
  • Daydream or imagine good things
  • Procrastinate by doing less important work
  • Remove the source of stress
Personally I spend a bit too much time on the last two items. Classifying these options, they seem to fall into one of three categories:
  • Solve the issue
  • Ignore the issue
  • Accept the issue
Unfortunately, if the only mechanism for relieving stress is to solve the problem, you're in for a rough life-- there's always something else you can worry about, and many things you can't fix. Ignoring issues seems fine for small problems. And acceptance is the only option available for problems too large to be solved or ignored.

I believe the real secret to happiness is properly identifying which problems should be accepted and which should be solved. And then realizing that most problems are of the former type. It's easy to get caught up in trying to fix everything, especially as a perfectionist. But the more you genuinely accept misfortunes as Not A Big Deal, the more you can enjoy the truly good things in your life.

If that's true, then the real secret to happiness is forgiveness.

Monday, May 4, 2009

Living In The Right

I recently had the chance to watch Alexandra Pelosi's documentary Right America: Feeling Wronged. (I'm assuming the video is legally viewable on the internet; if it's not, this review contains a reasonable synopsis.) The film follows the McCain/Palin 2008 presidential campaign and interviews the supporters on their beliefs. It's roughly 45 minutes long, but I found it well worth my time.

First, a few disclaimers. Yes, director Alexandra Pelosi is the daughter of current House Speaker Nancy Pelosi. And like every other documentary, I'm sure there's an agenda behind it. But this isn't a Michael Moore documentary, where the whole film is a setup to make a targeted person or group feel uncomfortable and mocked for their beliefs. That's not what this film is about. Alexandra seems to understand that her political connections automatically make her motives suspicious, and she generally works hard to ask unbiased questions, without making the subjects feel like the purpose is to laugh at them. For example, after seeing an ornament at a trailer park labeled "Redneck Wind Chimes", she asked the residents this series of questions:
  • Are you rednecks?
  • What is a redneck?
  • Do you use redneck as a derogatory term?
Whether or not you actually laugh at their responses is your choice. I found most of the people surprisingly articulate, and more than anything, I felt pity for them. At any rate, the movie gave me a lot to think about, but I took two very important things out of it. Both of them changed my perspective on the conservative / liberal divide in America right now.

A) Most conservatives viewed the election as a choice between right and wrong.

Overwhelmingly, the people interviewed held the viewpoint that McCain was clearly the right choice for president, and Obama was the wrong choice. In contrast, I feel like most liberal voters considered Obama the better choice and McCain the worse one. In other words, conservatives tended to evaluate candidates from the standpoint of black-and-white morality. For example, if you believe that abortion is a sin, then voting for a president who supports abortion must be a sin as well. That makes Obama an immoral choice.

I believe most liberals evaluated the candidates pragmatically. A typical liberal voter's top issues aren't morally charged in nature, so their voting selection process is based on which candidate will better address their issues. Voting for "the other guy" is merely a bad decision, not something that moves you one step closer to Hell.

B) Most conservatives have logically consistent opinions derived from a different set of premises.

These people gave Obama a good serving of all kinds of random accusations. Different people accused him of being a Muslim, a terrorist, the next Hitler, and even the Anti-Christ. These viewpoints are totally preposterous, but I want to stress that for the most part, the conservatives are acting rationally given their assumptions. If Obama really is a terrorist, then of course you shouldn't vote for him!

Most of the conservatives interviewed didn't even seem to be aware that other people might not share their premises. I got the distinct impression that many of these people couldn't figure out why liberals would vote for a known Muslim, and the best conclusion they could come to is that "those liberals must hate God".

I really want to stress this point, because it's very important. America's far right wing conservatives are not crazy. They are acting rationally from the basis of their premises, and they are not aware their premises might be incorrect. And for the record, liberals are the same. They are also not aware that their premises might be incorrect.

So if someone claims that Obama is the next Hitler because both were charismatic leaders who came to power at times of national distress, you won't make much headway if you flat out disagree with them. Saying, "Don't be silly; of course Obama isn't going to invade foreign nations, killing enormous numbers of innocent civilians," might be true, but it won't change their mind. It is better to discuss their flawed premise rather than disagree with their conclusion. For example, "Lincoln, Roosevelt, Washington, and Churchill were also charismatic leaders who came to power during national distress, and they were some of the best leaders of all time. Being charismatic doesn't mean he's evil."

This doesn't just apply to political groups either. It's easy to assume that anyone who disagrees with you is simply stupid or crazy. But more often than not, they are simply mistaken. That's why I have pity for both rednecks and Christians rather than disdain. And as a Christian, I pitied atheists. The vast majority of people are simply misinformed, and if you take the time to have an honest conversation with them, both of you will be better off for it.

Monday, April 20, 2009

Mental Filters

I write about a pretty wide variety of topics. Some common ones are:
  • Algorithm Design
  • Ethics and Morality
  • Philosophy
  • Politics
  • Learning
While these topics seem somewhat distinct, they all intersect on one question:

What things are good and how can I best do them?

The site's tag line is "Genuine and Artificial Intelligence", but the purpose of intelligence is to do good things, rather than bad things, inefficient things, or evil things. I think of each post as an opportunity to learn how to learn.

So it shouldn't surprise you that I love well reasoned and articulated explanations, especially things explaining how we think. For example, this essay on Lies We Tell Children was worth a blog post. I found one last week on the subject of open-mindedness. It's well worth the 10 minutes, so I highly recommend watching it:



There's a lot of good stuff in there, but I particularly liked the point about mental filters, starting at roughly 7:23. A lot of people think that being open-minded means you are willing to consider any idea, which essentially means your brain lacks a mental filter. They define being open-minded as accepting any idea you are told, no matter how far-fetched the circumstantial evidence is. What this really means is, "you should agree with me no matter what." And ironically, that's the exact definition of being close-minded, as it means they aren't open to other ideas.

The main point is that being open-minded means considering the evidence and logic for different explanations, then believing the explanations that best fit the evidence and throwing out the ones that don't. In other words, being open-minded is predicated on having an evidence filter.

Everyone is confronted with both true and false concepts all the time, and it's impossible to have a world view that only believes the "correct" things. Statistically speaking, you are virtually guaranteed to believe some ideas that are flat out wrong. But the objective is to minimize the percent of incorrect ideas you believe. That means the key word is "filter". If you have a brick wall for your mind, you are unable to accept any new ideas, so the incorrect premises you have will only turn into more incorrect conclusions. And if you have no mental protection at all, you'll believe all kinds of crazy things.

I believe many people who are truly close-minded mistake the mental filters other people have for brick walls. They falsely conclude that because someone rejected their idea, that person must not be open to any opposing ideas, rather than simply having a good reason for rejecting theirs.

All of this is well and good for understanding "the other guy". But how do you make sure that you aren't "the other guy"? Remember, the most close-minded people think they are open-minded, in the same way that arrogant people think they are humble. There's no firm method to conclude whether you are actually open-minded, but keep this in mind: Truly open-minded people occasionally change their beliefs. Ask yourself this question:

When was the last time I philosophically disagreed with someone, and after we expressed our beliefs, they changed my mind?

If you think you are open-minded but can't remember the last time someone else changed your opinion on something substantive, you are probably more close-minded than you think. Recognizing when you are wrong and having the humility to admit it is the true mark of an adult mind.

Monday, March 9, 2009

Love Thy Neighbor

Earlier this week, a reader made an interesting comment on my God of Stone post. Bryan analyzed the logic in the post and pointed out that it seemed to argue both in favor of and against revisionism in religion. My argument is actually progressive in nature. I favor revising religion when the revision is an improvement for humanity, and I'm against revisions that are not. For example, removing the moral restrictions on eating pork is a good revision. Praying to a statue of the infant Jesus is not.

As a reminder to newer readers, I am not a Christian anymore, although I was a highly devout one for thirty years. For an explanation of why I left my religion behind, you can read the posts In The Cleft of the Rock, The Man Behind The Curtain, and The Emperor's New God. I am now a humanitarian agnostic. In other words, I'm not sure if God exists, but I believe morality and ethics have meaning even outside the context of God.

While I don't agree with everything the Bible says, I certainly understand the perspective of those that do. Having studied the Bible for decades, I'm convinced that the biblical authors had a similar perspective-- that the true measure of morality and immorality is whether an action helps or hurts humanity. Furthermore, religious authorities in the Bible had no problems revising laws when the new law provided a greater benefit to humanity.

The clearest statement of this philosophy comes from Jeremiah 7. Jeremiah receives a message from God for the people of Israel and soundly chastises them for their idolatry and other immoralities. Then in verse 19, it says:
"But am I the one they are provoking?" declares the Lord. "Are they not rather harming themselves, to their own shame?"
In other words, no amount of immorality can possibly harm God. The whole reason things like idolatry are sinful is that they harm the sinner, by pushing them away from a God that loves them and will bless them with his presence.

This is a crucial argument. Given the premises of the Christian religion, it is absolutely impossible to harm God. Because God loves humanity, he wants them to prosper. Therefore, the only things that God considers sin are those things that work against the humans he loves. So if something doesn't harm any portion of humanity, it cannot be a sin.

Things are only sinful if they harm humanity.

I encourage Christians to stop thinking about God as a random set of likes and dislikes. "God liked Jews and hated pigs. Then later he decided pigs were alright, but homosexuality was still bad." That's just irrational. If you believe in intelligent design as most Christians do, then you need to accept that your God is rational and work from there. God likes stuff that helps humanity and hates stuff that hurts humanity. He might have a better understanding of what "help" and "hurt" mean, but that's as simple as it gets.

When you view the evolution of religion as a gradual improvement on rules that benefit humanity, many stories in the Bible make more sense. For example, in Genesis 9:3-4, Noah has just survived the Ark, and God changes the law of what food humans are allowed to eat. Before the flood, humanity was supposed to be strictly vegetarian, but now God says it's okay to eat animals. Why did God revise his law? Animals were no different than they were before the flood, and now there are fewer of them. Either you think this story is factually true, in which case God must have changed his mind. Or you think this is just a story, so at least the Bible's author has revised the religious law. In either case though, the reason for the revision is clear-- the new law benefits humanity more than the old law did.

God isn't the only important religious figure who revised the Judaeo-Christian religion. Jesus did too, in his Sermon on the Mount. Starting at Matthew 5:17, Jesus says:
Do not think that I have come to abolish the Law or the Prophets; I have not come to abolish them but to fulfill them.
He then proceeds to give nearly a dozen different examples of things people should do differently from what the Torah (the law from Moses) instructs or permits. Here are some of these:
  • Insulting people is sinful
  • Looking at a woman lustfully is as bad as adultery
  • Divorcing a woman who has not been unfaithful is sinful
  • You should not seek retribution on people who have offended you
  • Love your enemies
The entire Sermon on the Mount was a revision of the Torah, or at least a reinterpretation. If Jesus said he didn't come to abolish the law and then proceeded to contradict the Torah, then the Torah must not be the law he was talking about. The simplest explanation is that the Torah is merely one interpretation of the laws "Love your God" and "Love your neighbor as yourself". This interpretation can be improved, and that's exactly what Jesus set out to do in his Sermon on the Mount.

If God was willing to revise his own law and Jesus could revise the law God gave to Moses, then everyone who believes the stories in the Bible must logically conclude that revising religious laws can be a good thing, as long as the new law does a better job of loving your God and neighbor. Sometimes the Bible got it wrong and needs to be improved with a good dose of common sense. If Jesus did it and Christians are supposed to act like Jesus, they should not be afraid to apply common sense to their religious text either.

And while I'm not a Christian anymore, this is the reason I believe that every Christian should be pro-gay. There is nothing about homosexuality that is inherently harmful to any human. It makes the couple happy, so logically there's no reason it should offend God. If there are rules against it in the Bible, then maybe those authors just didn't hear from God correctly and people should reconsider whether a ban on homosexuality actually embodies "love your neighbor". It's not at all loving to tell two consenting adults that they cannot marry each other just because they are of the same gender.

Monday, February 9, 2009

God of Stone

A few weeks ago, I found a Catholic religious pamphlet in a subway car extolling The Infant Jesus of Prague. According to the pamphlet, praying devotions to the statue will cause God to intervene on your behalf and bless you with prosperity. Most non-religious people would regard such claims with mild amusement and derision, but I found myself deeply offended. My issue was not with the mystical claims, but the encouragement to pray to a statue. It goes directly against one of the Ten Commandments, something all Christian denominations (including Catholics) uphold to this day.

The second commandment says, in abridged form, "do not make or worship any idol". This command was ostensibly from God himself. It doesn't say, "do not worship idols unless they are of me or my messiah". The text is very clear. So when the Catholic church simultaneously says "don't worship any statues" and "give reverence to the Infant Jesus of Prague", they are being logically inconsistent. More than anything, I am offended at the irrationality of the claims. The Catholic church cannot have it both ways. I'm sure they would claim that this is not really idolatry because the statue is of Jesus, the true God. But the actual text of the Ten Commandments is clear that no physical object should be worshiped, even ones that represent God.

When I went to church as a child, I was taught that the Bible's warnings against idolatry were really a warning against making other things more important than God. In other words, if you value your friends or your grades higher than God, then you've made those things into an idol. While this explanation makes logical sense given the premise that nothing is more important than God, it is fundamentally reinterpretive. When the authors of the Bible warned against idolatry, they really were warning against worshiping stone statues. Now some of the post-exilic writings contain threads reminiscent of the modern reinterpretation, such as the Book of Haggai.

But I wonder how much reinterpretation is too much. Is it really fair to say that the Torah really meant to say, "idolatry is making anything more important than God?" If that statement is true, why does it matter whether the Torah said it or not? The source of a statement doesn't matter as much as the content. It is better to let the ancient writings say what they say, and be honest about our revision. "We allow people to eat pork because there's no good reason to disallow it, even though the Torah says otherwise."

Ancient religions depend on modern reinterpretation to remain relevant today. Or put another way, religions that fail to remain relevant lose followers. I believe this is why there are so many different religions and sects. The religions that have survived for millennia are those that are most tolerant of revision. Christians and even many liberal Jews have no problems eating pork or letting women wear pants, something that would have shocked Moses. Even one hundred years ago, interracial marriage was outlawed by nearly every Christian church in America. I suspect that in another hundred years, the majority of Christian churches will marry gay couples. When a religious law no longer has practical benefits, the religion always revises itself, even if it's a century too late.

This makes me wonder, however, why religions are so willing to revise their laws but so hesitant to revise their gods. The status of legendary religious figures does change slowly over time though. For example, in pre-exilic writings, Satan is portrayed as a henchman of God whose job is to test humanity. After the Babylonian exile, Satan becomes the adversary of God whose job is to destroy humanity. The clear concept of the messiah doesn't appear in Jewish writings until this time either, although post-exilic rabbis began to interpret the Torah as if it subtly implied a messiah. And the Christian church didn't officially decide Jesus was actually God (not just the son of God) until a few centuries after his crucifixion. But in the grand scheme of things, the identity of a religious figure changes much slower than the interpretation of a religious law.

Perhaps the application of ancient laws to modern life produces a stronger discord than imagining deific powers operating in Manhattan. The revision of religious figures only seems to occur after the religion's followers as a whole face extreme persecution-- the Babylonian captivity, or Christian oppression under Roman emperors.

From my perspective though, I don't see much use for these religious figures. There's a lot of good philosophy in Christianity, and that is worth keeping. But the concepts of God, Jesus, and Satan don't have any real impact on my life. I used to imagine they did in the same way early Jews worshiped idols of Baal. God was my idol of stone, and I realized the good Christian philosophy didn't have anything to do with praying to Jesus. Revising my concept of God might shock modern Christians, but to me it's no different from revising prohibitions on pork. In the end, all religious philosophies evolve to become a set of rules that benefit society as a whole. Deities aren't required.

Monday, December 15, 2008

First Causes

In recent years, I've come to realize the most formative event in my life occurred when I was three years old. My parents took us kids out on a walk, and for reasons that aren't clear to me, we didn't take the dog along. My parents put him out on the deck so he could get fresh air, but kept him on a leash tied to the deck so he wouldn't run off and get lost. When we got back an hour later, I ran ahead to the house and discovered the body of our dog hanging from the deck. He had jumped over the railing to follow us and accidentally hung himself on his leash.

This would have been a traumatic experience for any child. I'm certain that if my parents had found the body first, they would have tidied things up a bit and told me a good lie to lessen the blow of what actually happened. But I found him first, and the memory is firmly etched in my mind. As my three year old mind tried to make sense of the situation, I was confronted by an overwhelming sense of loss and waste. I wasn't just sad that my dog had died. I was particularly hurt by how needless the death had been. If my parents had taken my dog along, left him in the house, or tied him to a low post on the deck staircase, none of this would have happened. I didn't blame my parents for not thinking of this. It was just an unlucky situation that could easily have been avoided.

I didn't recognize it until decades later, but that experience framed the rest of my life. It wasn't framed with an objective or a rationale, but with an emotion:

I despise waste.

I've lived my entire life in a constant struggle against inefficiencies and minimizing potential risks. I work hard to prevent problems before they occur. For example, when I get out of any car and shut the door, I always try the handle to make sure the door is locked. I always check that I bring my wallet with me when I take my keys and vice versa. If I have to be somewhere in an hour and it will take forty minutes to get there (including the margin of error), I find something to do for exactly twenty minutes.

I think this is why I was attracted to computer programming as a profession, and why I'm so good at it. Good programming means you can teach a computer to do menial tasks that would otherwise cost a human lots of time. I'm anal about error checking and commenting in my code because I absolutely do not want things to go wrong. Carefulness has become a way of life for me, so "clean" programming is second nature. Many programmers complain that writing good code takes a lot more time than sloppy code, but I don't agree. I've been writing good code so long that it's actually faster than writing "bad code".

As the story left off last week, I had finished writing the most impressive homing missiles ever. And for game balance reasons, I couldn't use the work the way I wanted to. Obviously this pushed my buttons about wasting time and effort, so I designed a Quake 3 game modification (aka "mod") using the homing missiles. Thus Seeker Quake was born. As this was released over eight years ago, it's a bit hard to find sites where you can download it, but I believe it's still available from this site.

The PlanetQuake article does a good job of summarizing the game, but here's the basic gist. Each player starts with a rocket launcher and gets one seeker shot active at any point in time. When you shoot, it selects a random person with higher score than you and mercilessly tracks them down. They can try to outrun it, but they absolutely cannot hide from the seeker. While your seeker is active, all other rocket shots act like normal rockets. And of course, you can use any other weapon available on the level too.

It's a pretty simple twist but it has a really interesting effect on game balance. I tried games with four to eight players of varying skills. On a typical level with a point limit of 20, the scores would probably range anywhere from -2 to 20. But in Seeker Quake, final scores were generally in the 13 to 20 range. The best player still pretty much always won and the worst player always lost, but the range was much tighter. Seekers act as a handicapping mechanic, as the best player got targeted with a lot more seekers. He had to work harder to maintain his edge. Meanwhile, the worst players on the level had almost no seekers targeting them, so they had more time to catch up.

Here's the bottom line: When a bunch of players with widely varying skill play Seeker Quake, everyone has a good time and everyone is challenged relative to skill. The game is a total blast.

My life hasn't been all cupcakes and roses. But I've worked hard to turn bad situations into good ones. I'm glad I've taken the time to do so.

Monday, December 8, 2008

Building a Better Wizard

I write a lot about the artificial intelligence in BrainWorks, but that's not everything I program. Seven years ago I created a Quake 3 mod called Art of War. It's a class-based teamplay game, not so different from Team Fortress in that respect. Each class belongs to a faction with its own play style-- there was a faction with strong assault classes and weaker defenders, another with stronger defense and weaker assault classes, and so on. But even the heavily defensive faction needed one competitive assault unit. It just needed abilities that were "in theme" for the flavor of the faction. The resulting class, the Wizard, became a stealth attacker. When upgraded, it could blink through walls and turn invisible, making it ideal for hit-and-run guerilla warfare tactics but terrible for an all-out overrun of the enemy base.

Since each class had three upgrade levels, I was looking for a third ability for the Wizard that was in theme for the stealth assault play style. I decided to try homing missiles. The idea was that the wizard could shoot fireballs that would navigate through corridors and eventually home in on enemy structures. The wizard could assault the enemy base without even being inside it, forcing the defenders to track down the wizard and kill them. That's a very different gameplay situation from defending against a swarm of attackers, and variety makes for good gameplay.

Given a choice between doing something the standard way and the excessive way, I usually choose excessive, and it was no different for these homing missiles. First let me explain the standard algorithm for homing projectiles.
Repeat every 50 milliseconds:
  • Check for possible targets in the missile's cone of view.
  • If there are no targets, keep heading forward.
  • Otherwise, turn a bit towards the target closest to the missile's forward direction.
That's about it. You can write that in 15 lines of code, and the resulting missiles are reasonably effective. The big problem with these missiles is their lack of memory. They can only latch onto a target that fits in their field of view. If the target can duck around a corner, the missile will forget about them and keep moving forward into a wall. A human would never lose track of a real opponent who stepped out of sight for an instant, but the homing missiles just don't know any better.
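For the curious, the standard steering loop above can be sketched in a few lines of Python. This is my own illustrative 2D version, not actual Quake 3 code; the cone width and turn rate are made-up constants.

```python
import math

def normalize(v):
    mag = math.hypot(v[0], v[1])
    return (v[0] / mag, v[1] / mag)

def steer(missile_pos, missile_dir, targets, cone_cos=0.5, turn_rate=0.2):
    """One 50 ms tick of the standard homing algorithm (2D sketch).

    cone_cos:  cosine of the half-angle of the missile's view cone.
    turn_rate: fraction of the directional difference closed per tick.
    """
    best, best_dot = None, cone_cos
    for t in targets:
        to_t = normalize((t[0] - missile_pos[0], t[1] - missile_pos[1]))
        dot = to_t[0] * missile_dir[0] + to_t[1] * missile_dir[1]
        if dot >= best_dot:          # inside the cone, closest to forward
            best, best_dot = to_t, dot
    if best is None:                 # no visible target: keep heading forward
        return missile_dir
    # Turn a bit towards the chosen target.
    blended = (missile_dir[0] + turn_rate * (best[0] - missile_dir[0]),
               missile_dir[1] + turn_rate * (best[1] - missile_dir[1]))
    return normalize(blended)
```

Note that the missile's "memory" is exactly one tick long: if no target is in the cone this tick, it flies straight, which is precisely the weakness described above.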

Not being content with the standard solution, I fortified homing missiles with some industrial strength awesome! These seeker missiles locked onto a target when it was fired and remembered that target's location (even if it went out of view). Obviously the seeker does a standard turn towards the target when it can get line of sight. When the seeker can't view the target, however, it employed the equivalent of sonar to get a navigational reading on its surroundings. The seeker would look in front of itself for walls, pillars, and other obstacles, trying to find large open hallways and rooms. It considered every direction that had sufficiently large space. From that list of possibilities, the seeker picked the direction that was most in line with the target's current position.

Now it's still possible for the seeker to get caught in a corner or alcove. Abstractly this is a local minimum: something that appears to be an immediate improvement but is not useful for reaching the optimal goal. To combat this problem, the seeker kept track of how much free space it wanted. Remember, it only considers directions that have enough open space. By modifying the definition of "enough", the seeker could control the tradeoff between getting to the target and getting to a wide-open area with lots of navigation options.

Whenever the seeker spent time turning, it increased the amount of space it wanted, making it favor open areas (and deprioritize finding the target). And whenever it went straight, it decreased how much space it needed (prioritizing finding the target). The result was that when a missile got stuck in a corner, it would start desperately searching for any hallway it could find, just to get somewhere else. When it found one, it would "relax" a bit and be a bit pickier in the next area. This is a form of simulated annealing.
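The adaptive clearance idea can be sketched like this (again, all names, growth factors, and clamp values are my own illustrative choices, not the original mod's):

```python
def pick_direction(candidate_dirs, open_space, to_target, min_space):
    """Choose among probed directions (unit vectors) that have at least
    min_space of clearance, preferring the one most aligned with the
    remembered target direction. Returns None when nothing qualifies."""
    viable = [d for d in candidate_dirs if open_space[d] >= min_space]
    if not viable:
        return None
    return max(viable, key=lambda d: d[0] * to_target[0] + d[1] * to_target[1])

def update_min_space(min_space, turned, grow=1.25, shrink=0.9,
                     lo=32.0, hi=512.0):
    """Anneal the clearance requirement: turning raises it (favor open
    areas over the target), flying straight lowers it (favor closing in
    on the target). Clamped so it never degenerates."""
    min_space *= grow if turned else shrink
    return max(lo, min(hi, min_space))
```

A seeker boxed into a corner keeps turning, so its clearance requirement ratchets up until only big rooms and hallways qualify; once it escapes and flies straight, the requirement relaxes and the target direction dominates again.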

This entire bit of code took about one week to write and was around 500 lines of code. I tested it for about an hour, tweaked some numbers, and it just worked. Watching it was incredible, as you could see the seekers do amazing things. They would seamlessly turn down narrow hallways with sharp turns, open doors, fly up and down ledges. Think "precise robotics control on a remote control missile" and you'll have the basic idea.

I was really excited to try this out in the next build of Art of War, because I wanted to see how this would affect the play style of the Wizard. To my dismay, it was a total disaster. The homing fireballs ended up being far too good. A wizard could shoot a fireball anywhere on the level and they would automatically find and kill enemy structures. In fact, a team of wizards could launch an assault by standing in their own home base, and there was just no way to defend against that. To balance the class, I had to remove all the cool homing missile code.

Of course, I wasn't willing to let good code go to waste...

Monday, November 24, 2008

Boot Camp

An old friend of mine was a former army drill sergeant. I was surprised to learn this, as he didn't fit the stereotypical personality at all. Thoughtful and friendly are not the first words that come to mind when you hear the phrase, "Drill Instructor". This former job came up when someone mentioned they were leaving for basic training (aka. "Boot Camp") in six weeks, and my friend immediately told the guy, "Start doing push-ups now." Apparently you do so many push-ups in boot camp that it's never too early to get in shape, and the better physical shape you are in, the easier it goes. Not that boot camp is ever easy.

Intrigued by learning that my friend had been a drill sergeant, I asked him about basic training from the sergeant's perspective. According to him, the most important thing you can do to survive boot camp is to be practically invisible. You need to blend into the crowd and never even be noticed by the drill instructors. The instructor's job is to spot recruits that don't fit the mold and make them fit in.

"The instructors don't hate you," he said. "It's just that nothing you've learned in civilian life is of any use in the army, and they need to beat that crap out of you."

The typical civilian doesn't take orders very well. Generally orders are treated as "loose guidelines". Civilians are likely to ignore orders outright, do only the parts they want, or do things differently based on their own preferences. That kind of attitude is acceptable (although not optimal) in real world business settings. In the army, though, the lives of thousands if not millions of people are at stake, and defying orders can have large consequences that the enlisted units can't possibly be aware of.

If your army company controls a bridge, but your officer tells you to march one mile down a river and swim across at midnight, then that's exactly what you do. The option of taking the bridge across and walking down the other shore shouldn't even enter your mind, even though your officer never told you why he gave you your orders. Soldiers who disobey orders either get killed or get other people killed. That's why the number one objective of a drill instructor is to teach privates to follow orders.

There's a similar issue for AI in team play video games. If your team contains some bots or possibly "side kicks", the game designers generally give you some way to give them orders. No game designer considers making your minions sometimes ignore orders, of course. Rather, the problem is this: just as most people aren't used to following orders precisely, they also aren't used to giving them precisely. There's a whole set of problems in determining what the player even intends when they say something as simple as, "go back through the door". Which door? How far back? So much of language depends on context, and it's just hard to determine meaning from a small snippet of words.

The second issue is that no matter what order a player gives, they almost never want the bot to perform that task indefinitely and precisely. When you say "go guard the red armor", it's just until they spot an enemy. When the bot spots an enemy, you don't want it to let them get away if that enemy leaves the red armor spot. The bot should stop guarding and start attacking in that case. Similarly, if no one goes by the red armor for a while, the bot should stop guarding and find something useful to do. When the player says, "Guard the red armor", the bot needs to understand that as "Guard the red armor until you see an enemy or until you've been there for three minutes".

This is the basic strategy BrainWorks uses for its message processing, which is largely unchanged from the original Quake 3 message code. It does its best to determine what the player means and treats that instruction as a relatively important high level task. But it's still just a guideline. Tasks like combat take precedence, and the bot will forget about the order after a few minutes. Ironically, the bots need to function like civilians rather than soldiers, or the person playing the game wouldn't be happy with how the bots follow orders.
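The "order as a guideline with a timeout and a combat override" idea can be sketched roughly like this. The struct, the function name, and the three-minute constant are illustrative assumptions, not the actual BrainWorks API.

```c
/* Hypothetical sketch of orders-as-guidelines. */
#define ORDER_TIMEOUT 180.0f   /* forget a standing order after three minutes */

typedef struct {
    int   active;       /* is there a standing order? */
    float issue_time;   /* game time (seconds) when the order was given */
} bot_order_t;

/* An order stays in force only while no higher-priority task (such as
 * combat) interrupts it and it hasn't gone stale.  "Guard the red armor"
 * really means "guard it until you see an enemy or enough time passes". */
int OrderStillApplies(const bot_order_t *order, float now, int in_combat)
{
    if (!order->active || in_combat)
        return 0;
    return (now - order->issue_time) < ORDER_TIMEOUT;
}
```

Each frame the bot checks whether its current order still applies before planning; if not, it falls back to its normal goal selection, which is exactly the "civilian" behavior described above.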

Monday, November 17, 2008

Ignorance Is Bliss

Having written twice about the dangers of believing everything you are told, I'd like to give some face time to the opposing argument:

Ignorance is bliss.

It's all well and good to say, "You should understand things, not just follow blindly." But to be pragmatic, there is only one time this is a serious improvement: when the blind belief is wrong. What about the times that blind belief is right? Humans survived for centuries without understanding how gravity truly worked and that didn't stop them from creating some awesome things. They were even able to use the fact that "things fall towards the ground" to great effect, such as building water clocks, without ever learning Newton's laws of motion.

Even if someone did want to totally understand the world today, it wouldn't be possible. There is such a vast corpus of information that learning even 1% of it is literally impossible in the span of a human life. Mathematicians who are experts in their field rarely have the chance to keep up on other branches of mathematics, to say nothing of physics, chemistry, or medicine.

The fact of the matter is that most of the information we've been told is correct. If I get a bus that says "Downtown" as its destination, it really is going there. The driver could take it anywhere, but I'm very certain that the destination is downtown. When I order a meal at a restaurant, I take it for granted that the cook can actually make the things on the menu and that I'll be served food that's reasonably close to the description provided. The waitress serves me food assuming that I will pay the price listed. I suspect that fewer than 1 in 10,000 people really understand what a computer does when you turn it on, but hundreds of millions of people use a computer every day. Understanding is a luxury, not a necessity.

Like it or not, our lives as humans are anchored in faith, not reason and understanding, and this is the cornerstone of the religion we all share: Causality. If we do something ten times in a row and get the same result, we expect that the eleventh time will produce the same result. And it does, even though we rarely know why. Understanding everything is impossible, and the whole purpose of culture is to provide structure so that everyone can use the discoveries other people made.

If that's the case, why bother thinking about anything at all? Why not let someone else do your thinking for you? The primary purpose of education isn't to give people information, but to teach them how to think. And most importantly, to teach them when to think. Thinking is really important in uncharted waters. In any situation that doesn't match your typical life experience, thinking will give you a much better idea of what to do than trying something at random.

Unsurprisingly, the same problem comes up in artificial intelligence. There's only so much processor time to go around. So if seven bots all need the same piece of information and they will all come to roughly the same conclusion, it's much faster to do a rough computation once and let them all use those results. This leads to an information dichotomy, where there's general "AI Information" and "Bot specific information". Each bot has its own specific information, but shares from the same pool of general information. In BrainWorks, all bots share things like physics predictions and the basic estimation of an item's value. If a player has jumped off a ledge, there's no reason for every bot to figure out when that player will hit the bottom. It's much faster to work it out once (the first time a bot needs to know) and if another bot needs to know, it can share the result.

These are the kinds of optimizations that can create massive speed improvements, making an otherwise difficult computation fast. If you think about it, it's not that different from one person researching the effects of some new medicine and then sharing the results with the entire world.

Monday, November 10, 2008

Social Mentality

I cannot help but comment on the results of the recent American presidential election, in which Barack Obama became the first non-white to be elected president. As a jaded American, I recognize that America has its share of both wonders and problems. But I have never been more proud of America than I was on this past election night. To me, the election of Barack Obama is symbolically a major victory in America's war against racism. And were Obama the Republican candidate and McCain the Democratic one, I would be no less overjoyed. Some things in life are more important than what percent of a candidate's positions we agree with.

I want to stress that this election is a victory over racism, not slavery. Most cultures in pre-modern times practiced slavery, but it was enslavement of people from the same ethnic background. Slavery in America and the Caribbean isles differed from other forms of slavery in that it was also coupled with a sentiment of racial superiority. So even after the abolition of slavery in 1863, the underlying tone of racism permeated much of American life. In contrast, the British Empire outlawed slavery in 1833, but their enslavement wasn't particularly racially biased, so their past two centuries haven't been filled with racial tension. Abraham Lincoln won the war on slavery in 1865, but completely devastating the southern American states couldn't change the racist opinions that much of the country still retained. The election of Barack Obama to the office of President is proof that a large percent of America is now racially blind-- the ethnic background of a candidate is not a reason to select against them.

Of course, not all of America feels that way. That's why this election is a sign of major progress on the issue of racial discrimination, but it doesn't represent the end of it. Looking at the final electoral vote distribution, this election was heavily slanted towards Barack Obama but not unanimous. For example, Alabama and Mississippi again voted for the conservative candidate as they've done for decades. But Virginia and North Carolina both voted for Barack Obama, two states that split away from America during the American Civil War because they wanted the right to keep slavery legal (among other things).

This might seem like a trite point, but each major population center in America has its own local way of thinking. People in Los Angeles are more liberal than people in Salt Lake City, for example. The southern states tend to be more conservative than New England states. And the way they vote is a reflection of their local societal beliefs. If this were not the case, every city and state would vote exactly the same way. There's nothing magical about the geography that makes people think in certain ways. The social mentality is purely contained in the minds of all people in that local society, and if you think about it, that is an incredible thing.

For North Carolina to vote for a black man 147 years after it tried to secede from the United States, the entire cultural mentality had to change. It does not change quickly either. It wasn't until all the adults of that generation died, along with their children and grandchildren, that the majority of the state decided that maybe blacks and whites were equals. That should be a sign of how easy it is to believe the first thing you're told and how hard it is to consider outside opinions.

More often than not, people belong to the political party and religion of their parents, quite apart from the actual merits of those positions. Over 90% of America is Christian and well under 10% of India is Christian, despite extensive missionary work. Even if there were strong, logical reasons to believe in the Christian religion, it's clear that those reasons are not why America is a Christian nation. If those reasons were so logically compelling, then India would be a Christian nation too.

Like it or not, if your political and religious views closely match your family's and city's views, the odds are high that you haven't thought very hard about them. That doesn't mean your beliefs are wrong, but if they are right, you probably don't know why they are. It means you accept the basic beliefs of society's "hive mind", and you've ceded some of your thinking.

While this can be dangerous, accepting society's beliefs without too many questions can be really helpful. I'm not advocating, "question everything", but, "question everything important". You might be wondering why that is, and what this has to do with Artificial Intelligence. Next week I'll explain what I mean.

Monday, November 3, 2008

Do Not Want!

I promised an in-depth explanation of how bots in BrainWorks perform dodging. But sometimes a picture provides the perfect overview:


That's the core of the algorithm. For whatever reason, this dog has decided he absolutely does not want to catch the ball someone threw at him, and the only thing his mind is thinking about is how he can safely get away.

When a bot needs to dodge a missile, the bot considers the eight possible directions it can move in (forward, backward, left, right, and each diagonal), plus the "dodge" of standing still. Any movement direction that collides with an incoming missile is considered completely off limits. Yes, it's bad to get "herded" where the enemy wants you to go, but that's a situation the bot can deal with later. There's no sense in eating a rocket now so that the bot might not take other damage later. Priority number one is always avoiding incoming damage.

So for each of the nine total dodge options, the bot computes what would happen if it moved in that direction for around a second. If a missile would hit the bot, it flat out ignores the option. It also has to check for some other problems. Obviously "dodging" into a wall isn't much of a dodge at all, so bots only dodge in directions with enough open space to move. And if the dodge is off a ledge, into a pit, then the bot also ignores that option. Those are the three reasons a bot will automatically disqualify a potential dodge direction:
  • Would hit a missile
  • Would hit a wall
  • Would fall down a pit
The bot might then dodge in any of the remaining dodge directions. There are two obvious and wrong ways the bot can choose which direction to move in.

Option 1: Move in the direction that gets the bot closest to its intended destination

The problem here is that the bot is very predictable. If you know where the bot wants to go and what route a missile shot cuts off, you know exactly where it will head next. That makes it trivial to hit the bot with your next shot, and being predictable is not a luxury the bot has.

Option 2: Choose one of the remaining directions at random

This solves the predictability problem. But when you pick the remaining directions at random with equal probability, the bot's natural movement will be away from the missiles, without any regard for where the bot wants to end up. This random selection lets the attacker control the general direction the bot will move in, possibly keeping the bot from its destination indefinitely. In this situation, it's easy for the attacker to route the bot into a corner and finish it off.

The solution is to combine these two strategies:

Solution: Select at random, preferring directions that move towards the intended destination

Rather than assigning an equal chance of dodging in any direction, the bot analyzes all options and rates how well each helps the bot reach its goal. The more helpful an option is, the higher the probability the bot will take it. However, any option could be taken. Over time, there is a high probability the bot will reach its goal, but its actual dodge direction is not predictable.

For example, suppose there is an incoming missile on the bot's east side, so it's unsafe to dodge east, northeast, or southeast. And suppose the bot would prefer to move northeast were the missile not in the way. The bot's dodge probability table might look like this:
  • North: 40%
  • Northeast: 0% (Ideal)
  • East: 0%
  • Southeast: 0%
  • South: 15%
  • Southwest: 5%
  • West: 15%
  • Northwest: 20%
  • Stationary: 5%
Once the bot knows where it wants to move, it keeps dodging in that direction for anywhere between three quarters of a second and a second and a half, at which point it chooses another direction (or possibly keeps going the same way). The result? A simple weighted probability function prevents the bot from running into missiles, being predictable, and being routed away from its goal.
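The weighted selection step can be sketched as standard roulette-wheel sampling over the nine options. This is an illustrative sketch, not the actual BrainWorks code; disqualified directions (missile, wall, pit) simply get a weight of zero.

```c
#include <stdlib.h>

#define NUM_DODGE_DIRS 9   /* eight compass directions plus standing still */

/* weights[] holds 0 for disqualified directions and larger values for
 * directions that better approach the goal.  Returns the index of the
 * chosen direction, or -1 if every direction is blocked. */
int PickDodgeDir(const float weights[NUM_DODGE_DIRS])
{
    double total = 0.0, roll;
    int i;

    for (i = 0; i < NUM_DODGE_DIRS; i++)
        total += weights[i];
    if (total <= 0.0)
        return -1;   /* nowhere safe to go */

    /* Roll a point in [0, total) and find which weight bucket it lands in. */
    roll = ((double) rand() / ((double) RAND_MAX + 1.0)) * total;
    for (i = 0; i < NUM_DODGE_DIRS; i++) {
        roll -= weights[i];
        if (roll < 0.0)
            return i;
    }
    return NUM_DODGE_DIRS - 1;   /* guard against floating point rounding */
}
```

With the probability table from the post (North 40%, Northwest 20%, and so on), the weights would just be those percentages; the sampling naturally skips the zero-weight directions blocked by the missile.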

Monday, August 25, 2008

Sitting on the Fence

For roughly the past month, I've been interviewing at a few different places looking for a new job. I've felt overqualified at my current place of employment and really wanted some more challenging work. At the end of the process I had two offers to choose from, both of which were really similar and far superior to my current job. I suppose most people might decide based on some simple heuristic like "highest salary", "most attractive coworkers", or "did I flip heads?"

But I'm a compulsive optimizer. Naturally I agonized for two weeks over which offer I should accept. I was hoping one offer would be clearly better than the other, but both the corporate cultures and salary packages were similar. I analyzed every angle I could think of, from the commute times to little things my interviewers had said. I called friends I knew and got as much inside information about the places as I could. I made risk estimates for the companies based on their respective industries. Talk about overkill! My opinion changed on a daily basis, but after two weeks I finally made my final decision.

This is, of course, the exact opposite of what good AI should do.

In most systems where a computer must analyze some data and make a decision, there's a problem when two choices are very similar. Sometimes the system can get caught in an oscillation state where working on either choice makes the other one better. For example, a Quake 3 bot might decide to pick up armor, but the act of moving towards the armor makes it decide that picking up some health is a better choice. So the bot instead moves towards the health, and on the way decides the armor is better. The bot can end up in a situation of indecision where it picks up neither of the two items, and clearly this is the worst of the three choices.

Item pickup isn't the only situation where this occurs. Target selection and weapon selection can have the same problems. A bot can try to chase down one target but on the way decide it would rather aim at a different one, wasting valuable time aiming at neither. Or in theory a bot can switch to one weapon and change its range based on the new choice, but by the time it gets in position it decides another weapon is more useful. If you've never seen a BrainWorks bot do this, that's great! I've worked hard to remove these indecisions from the AI, but it's nigh impossible to make AI that works in all situations. There are two basic tactics you can use to solve indecision oscillations, each with their own costs.

Most of the time I give the scoring estimate a bonus of between 10% and 25% for keeping the same decision. In other words, the AI won't change its mind unless the new option is at least 25% better than the old. The higher this factor is, the fewer oscillations will occur. However, the risk is that an option that's genuinely 8% better will be missed, even if it wouldn't cause an indecision loop. Additionally, it's rare but still possible that even with a 25% factor, the bot can get stuck in indecision. This method is best used for situations where the estimated value of one option over another doesn't change that rapidly. This is how general item pickup oscillations are handled, as well as enemy selection.

When a small buffer isn't sufficient, the other option is to prevent the bot from changing its mind until it completes its selection, or at least for a reasonably long duration (such as half a minute). This is guaranteed to solve the problem. But there are two drawbacks. Not only will the bot be more likely to make suboptimal decisions, but it could get caught in a situation where it's very predictable. As such, it's best to apply this tactic only when absolutely necessary and ideally when the action can be completed quickly.

This method is used in item pickup for items the bot is very near. If a bot can pick up an item in less than one second, that item immediately overrides all other item pickups and the bot grabs it no matter how useful some other item may be. It turns out that even with a bonus factor for keeping the same item selection, bots could still get stuck in indecision loops when they were very close to an item of low value. That's because the estimated points per second is the ratio of two very small numbers, so the estimation has a large margin of error. No bonus factor could stop the bot from switching between the bad nearby item and the good item that was far away. The code forces the bot to spend the half second to grab the nearby item, thereby preventing the loop from occurring.
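The first tactic, the 10-25% "stickiness" bonus, can be sketched as a simple hysteresis rule. The function name and the exact factor here are illustrative, not the actual BrainWorks code.

```c
#define KEEP_BONUS 1.25f   /* a new option must beat the old one by 25% */

/* Returns the index of the option to pursue, given each option's
 * estimated value and the currently selected option (-1 if none).
 * The current choice is scored with a bonus so the bot won't flip-flop
 * between two nearly equal options. */
int SelectOption(const float value[], int num_options, int current)
{
    int   best = current;
    float best_value = (current >= 0) ? value[current] * KEEP_BONUS : 0.0f;
    int   i;

    for (i = 0; i < num_options; i++) {
        if (value[i] > best_value) {
            best = i;
            best_value = value[i];
        }
    }
    return best;
}
```

A rival option worth 10% more than the current one loses to the inflated score and the bot stays the course; an option worth 30% more wins and the bot switches. The second tactic (the hard half-second commitment) would simply skip this evaluation entirely until the pickup completes.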

Regarding my new job, I was leaning towards the place that offered slightly less money but seemed to have higher job satisfaction. In the end they decided to beat the salary offered by the other company, so it was a no-brainer to pick the job that has both higher pay and happier workers.

Monday, June 30, 2008

A Peculiar Bug

In my postmortem of BrainWorks, I mentioned one of the big things I did right was creating a powerful debugging infrastructure directly in the code base itself. In general, software is easiest to test and maintain when as much of the testing as possible is fully automated. Now that's easy for a program like Excel, where you can create a battery of sheets with strange formulas that might cause errors, and then confirm that the numbers all match up. If the software creates something measurable, it can usually be automatically tested.

Naturally, this means automated testing is far harder for Artificial Intelligence than other areas of computer science. The goal of BrainWorks is "bots that appear to be human". How do you teach a computer to measure whether a player's decisions appear to be human? That's tantamount to writing a program that can judge a Turing test, which is as hard as passing one. Fully automated testing of the AI isn't an option, but there are certainly things that can assist testing. All you have to do is identify a measurable component and check if it meets human expectations.

For aiming, BrainWorks already measures accuracies with each weapon in each combat situation. The bot knows that it scores perhaps 35% with the shotgun against an enemy a few feet away and 5% against an enemy a hundred feet away. But it doesn't know if those numbers are reasonable until I tell it what to expect. The vast majority of the testing I did with aiming involved forcing a bot to use a particular weapon, making it play for hours, and then checking if the numbers "looked good". Generally they didn't, and I had to figure out why they weren't matching my expectations. In this testing process, I uncovered a variety of interesting bugs. Large sections of the aiming algorithm were rewritten around a dozen times trying to get things right. I found several errors in how accuracy was even being measured. But the strangest error I encountered was something totally unexpected.

I was monitoring Railgun accuracy as a way of testing the overall ability to aim. It's an instant hit weapon with no spread and infinite range, so it's a great initial test case. I loaded up eight bots on a wide open level and forced them all to have exactly the same skill, then ran them for several hours. Curiously, when I checked the results, their accuracies weren't all the same. The best had an accuracy around 75% and the worst was around 65%. Moreover, their scores reflected this.

I activated some mechanics in the system to modify aiming behavior. First I turned off all errors, so the aiming should be flawless, like watching a human who doesn't make mistakes. Their accuracies were still stratified. Then I completely circumvented the entire aiming code, forcing the bots to aim directly where they wanted to. That gave the bots what most people think of as aim hacking, so their aim should have been perfect. But even still, testing showed that some bots would get higher accuracies than others. Sure, there was one bot that would score 99.9% accuracy, but another bot would only score 97%. When a bot has perfect information and reflexes, it should not miss one in 30 shots.

Then one day I noticed that all eight bots were sorted in alphabetical order. The bot with the first name had the highest score (and accuracy) down to the bot with the last name having the lowest score. Since the odds of this are 1 in 8! = 40,320, I considered this curious but still possibly a coincidence. So I tested it again, and each time the bots were sorted alphabetically! That was the final clue I needed to isolate this bug.

The script I used to start testing adds the bots in alphabetical order, so I tried swapping the order different bots were added and their accuracies changed as a result. Each time, the most accurate bot was added to the game first and the least accurate bot was added last. For some reason, the internal numbering of bots was affecting their aim.

So why exactly was this the case? I'll let you puzzle over it for the week. Next week I'll explain why this happened and the extreme amount of work that went into solving it.

Monday, June 9, 2008

Making Things Right

You do it thousands of times each day without even thinking about it. It happens when you put on your clothes, brush your teeth, eat your food, and open a door. Every time you walk down a hallway, drive down the street, or type at your computer, your brain is rapidly, subconsciously processing all kinds of data. Most of the information is uninteresting and gets discarded. But when the brain encounters little bits of unexpected data, for the most part it seamlessly reacts, performing minor adjustments to correct whatever mistakes were made in fine motor control. Someone might brush your shoulder as they pass you on the street, and without even thinking about it you step to the side. Or you see the car in front of you start braking and you slow down as well. You don't swallow your food until the pieces feel sufficiently chewed in your mouth.

Somehow, the human brain performs thousands of minor error corrections while executing basic motor controls. Large motions are fast but they aren't precise. How exactly does this work? If you mean to move your hand 10 centimeters and actually move it 11, your brain compensates for the extra centimeter and moves your hand back. Well, maybe it moves back only 9 millimeters instead of the full centimeter, but it gets close enough that it doesn't matter.

When I discussed how BrainWorks emulates human wrist motion, I explained this elegant mathematical model where errors accumulate based on muscle acceleration. But I brushed over the other half of this system, where the bot accounts for its error and readjusts. With error correction that's not good enough, a bot's aiming just looks erratic. And with error correction that's too accurate, the aiming looks robotic. There's a very fine balance to getting error correction that looks human, and to do that you need to understand how the human brain corrects muscle errors. There's just one problem with that...

I have no idea how it happens.

There have been numerous studies on this kind of thing, but I confess I haven't read them. Maybe they would have been useful; maybe they wouldn't. Instead, I tried a number of different methods for error correction (that number is 5), and only one of them produced realistic results. While I can't tell you how the human brain solves this problem, I can explain the one that worked.

From the bot's perspective, here's the problem. It thinks it's aiming at position X, but in reality it's aiming at X+e, where e is the error factor. As the bot corrects its error (i.e. understands the mistake it has made), the error factor e approaches zero. An error factor of zero means the bot thinks it's aiming at X and is actually aiming at X+0 = X, meaning it perfectly understands where it is aiming. Most of my failed attempts used some form of exponential or linear decay. Ironically, the algorithm that worked is by far the simplest. It's just a canonical Monte Carlo algorithm:
  1. Set the error value to a random number between -e and +e
  2. There is no step 2
It's just that simple. You can read the full explanation behind this algorithm in the DataPerceiveCorrect() function in ai_view.c. It's a 100 line comment describing, quite literally, one line of actual code. Here's the body of the function:
float DataPerceiveCorrect(float estimate_offset)
{
// Pick a new value in (0, +error) or (-error, 0)
//
// NOTE: It doesn't matter what the sign of error is; the random
// function will preserve it.
return random() * estimate_offset;
}
While I won't reproduce that entire 100 line description here, here's a portion of it that explains in more detail why this works:

Humans seem to use the following algorithm for refining estimates. For example, finding the dictionary page that contains a given word.

1) Make a reasonable estimate given the human's understanding of the situation. For example, even though W is the 23rd letter of the alphabet, humans don't look at the (23/26) * MAX_PAGES page of the dictionary when looking up W words, simply because they know the X-Y-Z sections are so short. This indexing is similar to the Interpolation Search algorithm. This result is compared to the actual value (ie. is the guess too high or too low?) and this value is fixed as either an upper or lower bound. In other words, you mark this page with your finger.

2) Possibly make one or two more reasonable estimates to determine both an upper and lower bound that the data must reside in. At this point the human knows that lower < value < upper. He or she knows the precise values of "lower" and "upper" but NOT of "value".

3) Pick a random value in the interval (lower, upper) and compare it to "value". This selection replaces either the lower or upper bound as necessary. Repeat until the selection equals "value".

This might seem unintuitive, but humans don't actually use binary search to narrow down their errors when the range gets sufficiently small. Perhaps it takes too much time to roughly estimate the middle. In practice people will flip through maybe 10 pages at a time, or 1 page at a time, or just pick something and see. It will take more iterations to converge than binary search would but-- and this is crucial-- it takes less time overall than computing the midpoint at each iteration.
That is to say, a computer's most natural method of search is not the same as a human's. Computers usually operate best with binary search, while humans "just try something". Only when I programmed the AI to do something counter-intuitive (for a computer) did the AI seem human.
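A standalone version of the one-line correction step makes it easy to see why the error converges: each step replaces the error with a uniform random value between zero and the old error, so the expected magnitude halves per step without ever computing a midpoint. The function name here is made up; the real code is DataPerceiveCorrect() in ai_view.c.

```c
#include <stdlib.h>

/* One Monte Carlo correction step: pick a new error uniformly between
 * zero and the current error.  The sign of the error is preserved and
 * the magnitude never grows, so repeated calls shrink the error toward
 * zero (halving it on average each step). */
float CorrectErrorOnce(float error)
{
    double r = (double) rand() / ((double) RAND_MAX + 1.0);  /* [0, 1) */
    return (float) (r * error);
}
```

Run it in a loop and the aim error decays noisily rather than smoothly, which is exactly the "just try something" convergence that ended up looking human instead of robotic.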