Previously I said that causality was the foundation for BrainWorks. Since the objective of the AI was to produce bots that seemed human-like, the bots needed to do things humans would do at the same times humans would do them. That "at the same time" restriction is particularly important. A human player will pick up health, armor, weapons, and ammo during a game, and it's not enough to make a bot that also picks up those four kinds of things. If the bot decides to pick up an extra weapon when it is low on health, or armor when it is healthy but low on ammo, then it isn't playing very effectively. It will be obvious to its human opponents that the bot is making mistakes when it dies so easily or just doesn't have the weapons to put up a good assault.
When a player picks up health, they do it for a reason: they don't want to die. The proper time to pick up health is when the player is in danger of dying. So if a bot is going to do the same things players do at the same times, then:
Bots must do things for the same reasons players do them.
That means making great bots for a first person shooter game comes down to understanding why human players do the things they do. Why do players miss more often against dodging players? Why do players time the respawns of armor but not health or ammo? Why do players choose to fight with a rocket launcher sometimes and a machine gun other times? You have to understand how you play the game before teaching someone else how to play. If only the external details match but not the internal decision process, it's very easy for the bot to appear inhuman, picking up ammo when it's low on health and so on.
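To make that idea concrete, here is a minimal sketch of reason-driven item selection. This is my illustration, not actual BrainWorks code; the thresholds and item names are assumptions chosen for clarity. The point is that the bot decides what to pick up from the same underlying need a human player would feel, rather than grabbing whatever is nearby.

```python
def choose_item(health, armor, ammo,
                max_health=100, max_armor=100, max_ammo=50):
    """Pick the item to pursue based on the bot's current need.

    Hypothetical priority order: a player low on health is in danger
    of dying, so health comes first; then ammo, so the bot can fight;
    then armor for survivability; otherwise grab an extra weapon.
    """
    if health < 0.35 * max_health:
        return "health"
    if ammo < 0.25 * max_ammo:
        return "ammo"
    if armor < 0.5 * max_armor:
        return "armor"
    return "weapon"
```

Under this scheme a bot at low health seeks health even if armor is closer, which is exactly the "same reasons, same times" behavior the rule above demands.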
This intense focus on well-understood causality automatically disqualifies some important AI tools, however: genetic algorithms and neural networks. You can think of these as black boxes with fixed inputs (e.g. your location, which enemies are nearby) and outputs (e.g. where to aim, whether to shoot). The boxes try things at random and their effectiveness is graded. Then the most successful ones are mixed together and the process repeats. Eventually you get one black box that does a good job of solving a particular problem with well-defined inputs and outputs. For many applications, genetic algorithms are great. You can get 90% or better effectiveness without really knowing how the AI works. In practice that means nine times out of ten the AI does the right thing and one time it does something stupid, but it all washes out in the end.
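The loop just described can be sketched in a few lines. This is a toy example with an artificial target, not anything from BrainWorks: candidates are tried at random, graded, and the most successful are mixed together and mutated, generation after generation.

```python
import random

# Toy target "behavior" the black box should learn, as a bit string.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(candidate):
    # Grade a candidate: how many bits match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    # Mix two successful candidates at a random split point.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(candidate, rate=0.05):
    # Occasionally flip a bit to keep exploring.
    return [bit ^ 1 if random.random() < rate else bit
            for bit in candidate]

def evolve(generations=100, pop_size=30):
    random.seed(0)  # deterministic for the example
    pop = [[random.randint(0, 1) for _ in TARGET]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the top half unchanged
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

The result is effective, but opaque: you get a candidate that scores well without any record of *why* each bit is set, which is precisely the property that makes the approach a poor fit here.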
The problem with using genetic algorithms in BrainWorks is that humans notice flaws much more than correct decisions. As the ultimate objective is bots that feel human, a bot that does ten things 90% well meets that objective. But a bot that does nine things perfectly and one really stupid thing doesn't.
In effect, writing BrainWorks was teaching a computer how I, Ted Vessenes, play first person shooter games. And the teaching required me to learn for myself why I make certain subconscious decisions. In the upcoming posts, I'll dive into a few specific applications. Up next week: Weapon Selection.