Wednesday, January 23, 2008

Two Kinds of AI

There are two kinds of artificial intelligence problems. In both situations, the AI has data describing its world, but the two kinds of problems require vastly different solutions. In a video game, the AI can have perfect information about, well, everything. You can find out the exact location of anything, how fast it's moving, and the precise source of every sound effect. The server can even give the AI perfect information about its enemies, like their current health and weapons.

Contrast this with AI in a real-world setting, like creating an AI that drives cars.

Now the AI has a whole different set of problems. Your car might have a lot of sensors, but there can still be measurement errors. The data might not be in the ideal format-- in a game, the coordinates of all nearby objects are already encoded in three dimensions, but an AI driving a car has to extract those coordinates from two-dimensional camera images. And if a game AI takes too long to think, the worst that happens is a slower server. If your car's AI thinks too long while driving on the freeway, it could get into an accident!

Of course, while writing AI for simulated environments like games is much easier than writing it for real-world applications, it comes with a different set of challenges. A bot that always knows where you are and aims perfectly isn't much fun to play against. To make a fun bot, the AI should only use data a real player would have access to.

The challenge of real-world AI is computing accurate data.
The challenge of simulated-world AI is determining which pieces of data to use.

So when I tackled the problem of selecting which enemy a bot wants to attack, you can imagine the kind of challenge I faced. The obvious solution-- considering every player in the game-- was simply wrong, because it used data the bot absolutely should not have access to. To solve this problem, I wrote an environment scanning engine for BrainWorks (ai_scan.c). Ironically, the purpose of this module is completely different from the scanning a car's AI would do. The BrainWorks scanning engine "fuzzes up" the data so it's less accurate, or simply ignores things the bot shouldn't know about. While it would be nice for bots to perform this task the same way humans do, in this case the bot approaches the problem in the exact opposite manner.
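
To make the idea of "fuzzing up" concrete, here is a purely illustrative sketch-- not the actual ai_scan.c code, and every name in it is made up. The server knows an enemy's exact position, but the bot only ever reasons about a noisy copy of it.

/*
 * Illustrative sketch only -- not the actual ai_scan.c code.
 * One way perfect server data could be deliberately degraded:
 * add sensor-style noise to an enemy's position before the bot
 * reasons about it.  All names here are hypothetical.
 */
#include <stdlib.h>

typedef struct { float x, y, z; } vec3;

/* Uniform noise in [-max_error, +max_error]. */
static float jitter(float max_error)
{
    return ((float) rand() / (float) RAND_MAX * 2.0f - 1.0f) * max_error;
}

/* The server knows the exact position; the bot only gets a fuzzed copy. */
vec3 perceived_position(vec3 exact, float max_error)
{
    vec3 fuzzy = exact;
    fuzzy.x += jitter(max_error);
    fuzzy.y += jitter(max_error);
    fuzzy.z += jitter(max_error);
    return fuzzy;
}

Everything downstream works with the degraded position, so the bot's decisions inherit a human-like error margin instead of server-perfect accuracy.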

The bot starts by checking everything in the game and testing whether it can see it: no walls are in the way, the bot is facing the thing, and the thing isn't too far away. The bot records every target it sees in its awareness list (ai_aware.c). If the bot hasn't seen a target before, there is a short delay before the target becomes "active" in the list, meaning the bot has officially spotted it. This is analogous to the time it takes humans to react to sudden changes. Furthermore, the bot remains aware of a target for a few seconds even if it hides behind a corner. Just as the bot can't know about a target behind a corner it has never seen, it must keep knowing about a target it has seen. That enables the bot to chase after the enemy, should it run away.
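
Here's a minimal sketch of how such an awareness list might be structured, in the spirit of ai_aware.c but not taken from it; the type names and constants are hypothetical, and the vec3 type is reused from the sketch above.

#define MAX_AWARE      16
#define REACTION_DELAY 0.3f   /* seconds before a fresh sighting counts      */
#define MEMORY_TIME    3.0f   /* seconds the bot remembers a vanished target */

typedef struct {
    int   entity;       /* which target this entry refers to       */
    float first_seen;   /* game time the target was first spotted  */
    float last_seen;    /* game time the target was last visible   */
    vec3  last_pos;     /* where the bot last perceived the target */
    int   in_use;
} aware_entry_t;

typedef struct {
    aware_entry_t entries[MAX_AWARE];
} aware_list_t;

/* Record a sighting, creating a new entry if one doesn't exist yet. */
void aware_update(aware_list_t *list, int entity, vec3 pos, float now)
{
    int i, free_slot = -1;

    for (i = 0; i < MAX_AWARE; i++) {
        aware_entry_t *e = &list->entries[i];
        if (e->in_use && e->entity == entity) {
            e->last_seen = now;
            e->last_pos  = pos;
            return;
        }
        if (!e->in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {
        aware_entry_t *e = &list->entries[free_slot];
        e->entity     = entity;
        e->first_seen = now;
        e->last_seen  = now;
        e->last_pos   = pos;
        e->in_use     = 1;
    }
}

/* A target is "active" once the reaction delay has passed, and it is
 * forgotten once it has been out of sight longer than the memory time. */
int aware_is_active(const aware_entry_t *e, float now)
{
    if (!e->in_use)
        return 0;
    if (now - e->first_seen < REACTION_DELAY)
        return 0;   /* the bot hasn't reacted to the sighting yet */
    if (now - e->last_seen > MEMORY_TIME)
        return 0;   /* out of sight long enough to be forgotten   */
    return 1;
}

A real implementation would also recycle expired entries and track more state, but the delay-and-memory idea above is the core of it.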

And of course, there are other things the bot scans for. If it sees incoming missiles, it notes them as well so it can dodge out of the way. Sound effects help the bot identify enemies that can be heard but not seen, and so on. Once the bot has a list of enemies, it analyzes them to decide which it should attack. (Generally this is whichever enemy the bot has the easiest time shooting.)
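
And a sketch of that last step, building on the hypothetical types above: walk the awareness list and pick the nearest active target as the "easiest" one to shoot. The real selection in BrainWorks weighs more than just distance, so treat this as an outline only.

/* Returns the chosen entity number, or -1 if the bot knows of no targets. */
int select_attack_target(const aware_list_t *list, vec3 bot_pos, float now)
{
    int   best = -1;
    float best_dist_sq = 0.0f;
    int   i;

    for (i = 0; i < MAX_AWARE; i++) {
        const aware_entry_t *e = &list->entries[i];
        float dx, dy, dz, dist_sq;

        if (!aware_is_active(e, now))
            continue;   /* only consider targets the bot has actually noticed */

        /* Distance to where the bot last perceived this target. */
        dx = e->last_pos.x - bot_pos.x;
        dy = e->last_pos.y - bot_pos.y;
        dz = e->last_pos.z - bot_pos.z;
        dist_sq = dx * dx + dy * dy + dz * dz;

        if (best < 0 || dist_sq < best_dist_sq) {
            best         = e->entity;
            best_dist_sq = dist_sq;
        }
    }
    return best;
}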

All that work just to figure out who the bot wants to shoot at! The bot might have thought about the problem differently from humans, but it came to the same results. Sometimes good AI means making the AI play worse instead of better.

1 comment:

Andrew said...

Since my final year project is doing remote control car AI via a PC operating the actual controller, and relying on a webcam to get images of the track, I certainly feel the pain compared to how easy it is to get information in a computer game!

I agree - not knowing things is difficult in some respects. You sometimes even have to build memory, just so the actors/NPCs don't forget the player just ran around the corner.

It's interesting, "dumbing down" information to be more human-like. Twitch games, I think, have it worst - tactical games have it better. Simply put: since a computer can excel at accuracy, it can always beat humans. On the other hand, outthinking the AI in an RTS or TBS is much more equal - and in fact, so often in favour of the player that the developers liberally cheat with FOW removal and more gold for the AI.

In more tactical shooters it changes - especially ones where guns simply are not 100% accurate, or near to it. Given FEAR has a pretty good squad simulation, you can get AI which works things out and, while elite, a player can still kill them, since it is more of a thinking puzzle than a simple case of "run in and get an instant headshot" - the fact is, FEAR NPCs retreat, throw grenades, etc., so it's pretty different to many FPSes.

And another aspect of that, of course, is AI squadmates - either ones who liberally die (commonly in Halo by killing themselves...), or are invincible (CoD4, or near to it in HL2). Improving their AI is paramount, but it needs some of the same constraints - and yet, if they are the same as the enemy, then since they're fighting a lot more enemies, the odds are they'll die through the sheer number of bullets going everywhere!