Wednesday, January 2, 2008

Artificial Causality

Previously I said that causality was the foundation for BrainWorks. Since the objective of the AI was to produce bots that seemed human-like, the bots needed to do things humans would do at the same times humans would do them. That "at the same time" restriction is particularly important. A human player will pick up health, armor, weapons, and ammo during a game, and it's not enough to make a bot that also picks up those four kinds of things. If a bot decides to pick up an extra weapon when it's low on health, or armor when it's healthy but low on ammo, then it isn't playing very effectively. It will be obvious to its human opponents that the bots are making mistakes when they die so easily or just don't have the weapons to put up a good assault.

When a player picks up health, they do it for a reason: they don't want to die. The proper time to pick up health is when the player is in danger of dying. So if a bot is going to do the same things players do at the same times, then:

Bots must do things for the same reasons players do them.

That means making great bots for a first person shooter game comes down to understanding why human players do the things they do. Why do players miss more often against dodging players? Why do players time the respawns of armor but not health or ammo? Why do players choose to fight with a rocket launcher sometimes and a machine gun other times? You have to understand how you play the game before teaching someone else how to play. If only the external details match but not the internal decision process, it's very easy for the bot to appear inhuman, picking up ammo when it's low on health and so on.

This intense focus on well understood causality automatically disqualifies some important AI tools, however: genetic algorithms and neural networks. You can think of these as black boxes with fixed inputs (e.g. your location, what enemies are nearby) and outputs (e.g. where to aim, whether to shoot). The boxes try things at random and their effectiveness is graded. Then the most successful ones are mixed together and the process repeats. Eventually you get one black box that does a good job of solving a particular problem with well defined inputs and outputs. For many applications, genetic algorithms are great. You can get 90% or better effectiveness without really knowing how the AI works. In practice that means nine times out of ten the AI does the right thing and one time it does something stupid, but it all washes out in the end.
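To make the loop concrete, here is a minimal sketch of the select-mix-repeat process described above. It is an illustration, not code from BrainWorks; the genome encoding, fitness function, and all parameters are made up for the example.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40):
    """Minimal genetic algorithm: random genomes are graded by a
    fitness function, the most successful half survives, survivors
    are mixed together (crossover) with a small mutation, and the
    process repeats until a good black box emerges."""
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Grade each black box and keep the most effective half.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Mix pairs of successful genomes and mutate one gene slightly.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            i = random.randrange(genome_len)
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: reward genomes close to all ones.  Nothing in the loop
# above knows *why* this is good -- which is exactly the problem the
# post describes.
def fitness(genome):
    return -sum((1.0 - g) ** 2 for g in genome)

best = evolve(fitness)
```

The key point for the article's argument: `evolve` produces an effective `best` genome without any representation of the reasons behind its behavior, so its mistakes can't be explained or fixed in human terms.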

The problem with using genetic algorithms in BrainWorks is that humans notice flaws much more than correct decisions. As the ultimate objective is bots that feel human, a bot that does ten things 90% well meets that objective. But a bot that does nine things perfectly and one really stupid thing doesn't.

In effect, writing BrainWorks was teaching a computer how I, Ted Vessenes, play first person shooter games. And the teaching required me to learn for myself why I make certain subconscious decisions. In the upcoming posts, I'll dive into a few specific applications. Up next week: Weapon Selection.


DKo5 said...

You managed to sum up in a few paragraphs (quite well, I might add) why every single FPS bot I've ever played was never convincing, and how to change that.

I am currently re-installing Quake 3 to give BrainWorks a try! :D

Ted Vessenes said...

To be fair, actually determining the reasons why humans play a game a certain way is very hard. If it weren't, the game wouldn't be very interesting. When I go over weapon selection, the simplest example, it will be clear how complicated this process is. Just wait until you see how the aiming works, though! That is some serious dabbling in the dark arts!

Anonymous said...

I think you should really concentrate on the bot's survival more than enhancing its accuracy and just timing items.

::Bot AI::
-evaluates where to go based on: (Stack, Current Weapons, Enemy Weapons, Enemy Stack, and Enemy last seen position) -note: this basically gives it survival from the start of the match, and keeps it from taking bad fights.
-ambushes: waiting or hiding at major items to draw the player in for damage
-rocket jumping: giving it the ability to reach the unreachable (items,weapons)
-strafe jumping: the ability to move around a little bit faster
-map control:
(It is broken down into a few fundamentals)
-Combat player decisions: fighting the player with weapons that he doesn't have, i.e. railgun from a distance if the player doesn't have railgun. Or even better, lightning gun at medium range if the player only has rockets. Rail\Deal damage at major items if the player is trying to make a steal.
-Chase the player if the player hasn't got more than 100hp and 100armor
-Stop the chase if the player deals enough damage to even the stacks
-Ambush the player's exit routes if possible
-out of control:
-Evaluate the danger: if the player is higher stacked in HP\Armor, try and give it a heads up to stay away from him. Sometimes telling it to wait where it is until the player gets near him is good enough.
-Make item steals if the item timing for mega health and red armor are split. This is also based off the player's location on red or mega health.
-Hit and runs, don't let the bot just chase you... give it street smarts bro. If it takes too much damage when it's low on hp\armor, it will simply die in the fight. If it takes some damage, tell it to run away.
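The chase rules above can be sketched as a small decision function. This is a hypothetical illustration of the commenter's suggestion, not code from Spiterbot or BrainWorks; the function name and the stack comparison are assumptions.

```python
def should_chase(bot_hp, bot_armor, player_hp, player_armor):
    """Sketch of the commenter's chase rules: pursue only while the
    player is below a full stack (more than 100 HP *and* 100 armor)
    and the bot still holds a stack advantage of its own."""
    # "Chase the player if the player hasn't got more than 100hp and 100armor"
    player_full_stack = player_hp > 100 and player_armor > 100
    # "Stop the chase if the player deals enough damage to even the stacks"
    stacks_even = (bot_hp + bot_armor) <= (player_hp + player_armor)
    return not player_full_stack and not stacks_even
```

For example, a fully stacked bot would chase a player at 50 HP with no armor, but would break off once its own stack dropped to match the player's.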

So basically, many of the things I listed are already programmed into Spiterbot. However, Spiterbot is also sort of dumb at times. I see the possibility of improving bots in Quake just by giving them survival skills; it's not even really about combat anymore. The bots need to know what weapons to use in combat, but most importantly a bot needs to know certain variables before deciding to fight or go get guns\armor\hp. It would be interesting to see the final product if you were to try and give it some of this logic.