Monday, June 9, 2008

Making Things Right

You do it thousands of times each day without even thinking about it. It happens when you put on your clothes, brush your teeth, eat your food, and open a door. Every time you walk down a hallway, drive down the street, or type at your computer, your brain is rapidly, subconsciously processing all kinds of data. Most of the information is uninteresting and gets discarded. But when the brain encounters little bits of unexpected data, for the most part it seamlessly reacts, performing minor adjustments to correct whatever mistakes were made in fine motor control. Someone might brush your shoulder as they pass you on the street, and without even thinking about it you step to the side. Or you see the car in front of you start braking and you slow down as well. You don't swallow your food until the pieces feel sufficiently chewed in your mouth.

Somehow, the human brain performs thousands of minor error corrections while executing basic motor controls. Large motions are fast, but they aren't precise. How exactly does this work? If you mean to move your hand 10 centimeters and actually move it 11, your brain compensates for the extra centimeter and moves your hand back. Well, maybe it moves back only 9 millimeters instead of the full centimeter, but it gets close enough that it doesn't matter.

When I discussed how BrainWorks emulates human wrist motion, I explained an elegant mathematical model where errors accumulate based on muscle acceleration. But I glossed over the other half of that system, where the bot accounts for its error and readjusts. With error correction that's too weak, a bot's aiming just looks erratic. And with error correction that's too accurate, the aiming looks robotic. There's a very fine balance to getting error correction that looks human, and to strike it you need to understand how the human brain corrects muscle errors. There's just one problem with that...

I have no idea how it happens.

There have been numerous studies on this kind of thing, but I confess I haven't read them. Maybe they would have been useful; maybe they wouldn't. Instead, I tried a number of different methods for error correction (that number is 5), and only one of them produced realistic results. While I can't tell you how the human brain solves this problem, I can explain the one that worked.

From the bot's perspective, here's the problem. It thinks it's aiming at position X, but in reality it's aiming at X+e, where e is the error factor. As the bot corrects its error (i.e. understands the mistake it has made), the error factor e approaches zero. An error factor of zero means the bot thinks it's aiming at X and is actually aiming at X+0 = X, meaning it perfectly understands where it is aiming. Most of my failed attempts used some form of exponential or linear decay. Ironically, the algorithm that worked is by far the simplest. It's just a canonical Monte Carlo algorithm:
  1. Set the error value to a random number between -e and +e
  2. There is no step 2
It's just that simple. You can read the full explanation of this algorithm in the DataPerceiveCorrect() function in ai_view.c. It's a 100-line comment describing, quite literally, one line of actual code. Here's the body of the function:
float DataPerceiveCorrect(float estimate_offset)
{
	// Pick a new value in (0, +error) or (-error, 0)
	//
	// NOTE: It doesn't matter what the sign of the error is; the random
	// function will preserve it.
	return random() * estimate_offset;
}
While I won't reproduce the entire 100-line description here, here's a portion of it that explains in more detail why this works:

Humans seem to use the following algorithm for refining estimates. For example, finding the dictionary page that contains a given word.

1) Make a reasonable estimate given the human's understanding of the situation. For example, even though W is the 23rd letter of the alphabet, humans don't turn to the (23/26) * MAX_PAGES page of the dictionary when looking up W words, simply because they know the X-Y-Z sections are so short. This indexing is similar to the Interpolation Search algorithm. The result is compared to the actual value (i.e. is the guess too high or too low?) and is fixed as either an upper or lower bound. In other words, you mark this page with your finger.

2) Possibly make one or two more reasonable estimates to determine both an upper and lower bound that the data must reside in. At this point the human knows that lower < value < upper. He or she knows the precise values of "lower" and "upper" but NOT of "value".

3) Pick a random value in the interval (lower, upper) and compare it to "value". This selection replaces either the lower or upper bound as necessary. Repeat until the selection equals "value".

This might seem unintuitive, but humans don't actually use binary search to narrow down their errors once the range gets sufficiently small. Perhaps it takes too much time to even roughly estimate the middle. In practice, people will flip through maybe 10 pages at a time, or 1 page at a time, or just pick something and see. It takes more iterations to converge than binary search would, but, and this is crucial, it takes less time overall than computing the midpoint at each iteration.

That is to say, a computer's most natural method of search is not the same as a human's. Computers usually operate best with binary search, while humans "just try something". Only when I programmed the AI to do something counterintuitive (for a computer) did the AI seem human.


Surv said...

Interesting post. Do you suppose it would work just as well with the gauntlet, or a melee-based weapon? I gather movement would come into the equation there as well. What is your idea on this?

Ted Vessenes said...

I don't see what the difference would be with a melee weapon. Internally, the AI just considers a melee weapon to have an extremely short range (40 units, I believe, as opposed to 768 for a lightning gun).

Regarding movement, it's true that it affects your aiming, due to parallax. There's some code in the AI to handle this problem, although I didn't mention it. And the parallax issue applies to all weapons, not just melee, because it depends on the distance to your target, not the range of your weapon.

surv said...

I am asking this because in an open-source game based on the Q3 code, there is one team using almost solely melee-based attacks in combination with walljumps and wallwalking. This means that to even come close to their opponents they need movement, and once there they need to keep moving or get killed.
Now, some people from the community were successful enough in implementing reasonable AI for the human team, but the aliens, the melee-based team, have proven more difficult. Only able to use the ground floor with this basic AI, without any of the advanced abilities, and lacking range, they are fodder for anything. I was hoping to gain some insights for the coders of that project, but it isn't a top priority of the game anyway. Thank you

Ted Vessenes said...

It sounds like you're talking about Natural Selection, a team based FPS/RTS hybrid based on Gloom for Quake 2.

Coincidentally, around 3 years before Natural Selection was released I wrote a precursor mod in the same genre, named "Art of War". While I'm biased, I think the design is far superior. There are no exponential growth issues and there are four factions instead of two, each with their own distinct play styles, strategies, and sub-strategies.

Anyway, the whole reason I started AI programming was the trouble I had getting players for the mod. It's easy to bootstrap a community for a mod that only requires 2 or 3 players. But when you've designed a team based game with interesting class synergies, the game doesn't really start to get interesting until you hit 4v4, and it's best in a 6v6 to 8v8 setting. Requiring 8 players just to get started was too much, so I wanted to write AI to fill in the slots. That way you could have 2 or 3 people play a reasonable game with bots while waiting for more human players to join the server.

So early on I had to decide whether to focus on AI specifically for Art of War or on general deathmatch AI. In the end I settled on the latter, because the standard Quake 3 AI was in such dire need of work that programming higher level strategies would have been meaningless. Some day I might go back and add AI to Art of War, but I kind of want a break for now.

Getting back to your question of adding AI to handle melee combat for aliens, here is the core problem. Melee combat is more about movement than attacking. If you are in range of someone, it's easy to do a lot of damage. Ranged combat requires aiming, melee combat requires navigation. And if you've been reading my earlier posts, you'll remember that Q3's navigation system needs a lot of work. A "this is my PhD thesis" quantity of work. It's certainly solvable, but I don't recommend trying it.

Another related issue is that the more dynamic methods of movement, such as walking on walls, don't work at all in Quake 3's concept of navigation, so all of that will have to be reworked from scratch as well. If you really want AI for these games, write it for the Humans race only and force the Alien team to be (ironically) all filled with live human players.

Surv said...

The game I was talking about is actually an offspring of Gloom. It's called Tremulous; it went standalone over a year ago and currently enjoys a reasonably sized playerbase. I notified them of your work when I heard of it on my other main site, Q3W.

Sadly, AI just isn't among their current priorities, because they now have the luxury of a playerbase after the notoriously low player counts of their days as a Q3 beta-stage mod.