There are two kinds of artificial intelligence problems. In both situations, the AI has data describing its world, but the two kinds of problems require vastly different solutions. In a video game, the AI can have perfect information about, well, everything. You can find out the exact location of anything, how fast it's moving, and pinpoint sound effects. The server can even give the AI perfect information about its enemies, like their current health and weapons.
Contrast this with AI in a real world setting, like creating an AI that drives cars:
Now the AI has a whole different set of problems. Your car might have a lot of sensors, but there can still be measurement errors. The data might not be in the ideal format-- in a game, the coordinates of all nearby objects are encoded in three dimensions, but an AI driving a car would have to extract those coordinates from two-dimensional camera images. And if game AI takes too long to think, the worst that happens is a slower server. If your car's AI thinks too long while driving on the freeway, it could get into an accident!
Of course, while writing AI for simulated environments like games is much easier than for real world applications, it has a different set of challenges. A bot that always knows where you are and aims perfectly isn't much fun to play against. To make a fun bot, the AI should only use data a real player would have access to.
The challenge of real world AI is computing accurate data.
The challenge of simulated world AI is determining which pieces of data to use.
So when I tackled the problem of selecting which enemy a bot wants to attack, you can imagine the kind of challenge I faced. The simple and obvious solution-- considering each player in the game-- was simply incorrect because it used data the bot absolutely should not have access to. To solve this problem, I wrote an environment scanning engine for BrainWorks (ai_scan.c). Ironically, the purpose of this module is completely different from the scanning a car's AI would do. The BrainWorks scanning engine "fuzzes up" the data so it's less accurate, or simply ignores things the bot shouldn't know about. While it would be nice for bots to perform this task the same way humans do, in this case the bot approaches the problem in the exact opposite manner.
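To make the idea of "fuzzing up" perfect data concrete, here is a minimal sketch in C. It is not the actual code from ai_scan.c; the noise model, the 2%-of-range error, and the function names are all assumptions for illustration. The point is just that the bot takes the exact position the server knows and deliberately degrades it, with more error at longer range:

```c
#include <stdlib.h>

/* Hypothetical sketch: degrade a perfectly known position before the
 * bot reasons about it.  The noise model and the 2% error factor are
 * illustrative assumptions, not BrainWorks' actual values. */

typedef struct { float x, y, z; } vec3;

/* Uniform noise in [-max_error, +max_error]. */
static float noise(float max_error)
{
    return ((float) rand() / (float) RAND_MAX * 2.0f - 1.0f) * max_error;
}

/* Return a "perceived" position: the true position plus an error that
 * grows with distance, so far-away targets are known less precisely. */
vec3 perceived_position(vec3 truth, float distance)
{
    float max_error = distance * 0.02f;   /* 2% of range, assumed */
    vec3 p = { truth.x + noise(max_error),
               truth.y + noise(max_error),
               truth.z + noise(max_error) };
    return p;
}
```

The design choice worth noting is that the error scales with distance: a target right in front of the bot is located almost exactly, while one across the map is only known approximately, which is closer to what a human player experiences.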
The bot starts by checking every entity in the game and testing whether it can see it: no walls in the way, the entity within the bot's field of view, and not too far away. The bot records all targets it sees in its awareness list (ai_aware.c). If the bot hasn't seen a target before, there is a short delay before the target becomes "active" in the list, meaning the bot has officially spotted it. This is analogous to the time it takes humans to react to sudden changes. Furthermore, the bot remains aware of a target for a few seconds even if it hides behind a corner. Just as the bot shouldn't know about a target behind a corner it has never seen, it should remember a target it has seen, even after that target ducks out of sight. That enables the bot to chase after the enemy, should they run away.
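The two timers described above-- a reaction delay before a new sighting counts, and a short memory after the target disappears-- can be sketched as a simple awareness entry. This is loosely modeled on the idea in ai_aware.c, but the struct fields, function names, and the specific timings (0.25s reaction, 3s memory) are assumptions, not the real BrainWorks values:

```c
/* Hypothetical sketch of an awareness-list entry: a target only becomes
 * "active" after a reaction delay, and stays remembered for a few
 * seconds after it was last seen.  Timings are assumed for illustration. */

#define REACTION_DELAY 0.25f   /* seconds before a new sighting registers */
#define MEMORY_TIME    3.0f    /* seconds the bot remembers a hidden target */

typedef struct {
    int   entity;       /* which entity this entry tracks */
    float first_seen;   /* time the bot first sighted the target */
    float last_seen;    /* time the bot most recently sighted the target */
} aware_entry;

/* A target is active once the reaction delay has elapsed. */
int aware_active(const aware_entry *e, float now)
{
    return now - e->first_seen >= REACTION_DELAY;
}

/* An entry expires when the target has been out of sight too long. */
int aware_expired(const aware_entry *e, float now)
{
    return now - e->last_seen > MEMORY_TIME;
}
```

Each game frame the bot would refresh `last_seen` for targets it can currently see, consult `aware_active()` before reacting to new ones, and drop entries once `aware_expired()` returns true.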
And of course, there are other things the bot scans for. If it sees incoming missiles, it notes them as well so it can dodge out of the way. Sound effects help the bot identify enemies that can be heard but not seen, and so on. Once the bot has a list of enemies, it analyzes them to decide which it should attack. (Generally this is whichever enemy the bot has the easiest time shooting.)
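The final selection step can be sketched as well. The post says the bot generally picks whichever enemy is easiest to shoot; the sketch below approximates "easiest to shoot" with plain distance, which is a simplifying assumption-- the real scoring would weigh more factors, and the names here are hypothetical:

```c
/* Hypothetical sketch of choosing an attack target from the awareness
 * list.  "Easiest to shoot" is approximated by nearest distance, an
 * assumption made purely for illustration. */

typedef struct { float x, y, z; int entity; } enemy_info;

static float dist_sq(float ax, float ay, float az, const enemy_info *e)
{
    float dx = e->x - ax, dy = e->y - ay, dz = e->z - az;
    return dx*dx + dy*dy + dz*dz;
}

/* Return the index of the closest enemy, or -1 if the list is empty. */
int select_target(float bx, float by, float bz,
                  const enemy_info *enemies, int count)
{
    int best = -1;
    float best_d = 0.0f;
    for (int i = 0; i < count; i++) {
        float d = dist_sq(bx, by, bz, &enemies[i]);
        if (best < 0 || d < best_d) {
            best = i;
            best_d = d;
        }
    }
    return best;
}
```

Because the function only ever looks at the list it is handed, the "don't cheat" guarantee lives entirely in the scanning stage: if an enemy never made it into the awareness list, the bot cannot decide to attack it.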
All that work just to figure out who the bot wants to shoot at! The bot might have thought about the problem differently from humans, but it came to the same results. Sometimes good AI means making the AI play worse instead of better.