Monday, March 17, 2008

Release Retrospective

While people have been overall pleased and impressed with the BrainWorks release, I've received one common bit of negative feedback. Many people have said the bots use the shotgun too often, especially in situations where it's flat out the wrong weapon to use. Now I don't have the luxury other people have of saying, "that's not a bug, that's a feature!" Or more commonly, "that's working as intended." BrainWorks is intended to feel realistic. If many people say a portion of it doesn't feel realistic, they are right. I have no grounds on which to disagree. There really is a bug in the code because you say there is.

Of course, if you've ever been told quite literally, "this thing doesn't feel right, so go fix it," you know how challenging a request that is. One downside of fundamentally basing the BrainWorks AI on causality is that debugging is extremely difficult. There's no magic number that tells the bot how often to use the shotgun. Instead, the bot analyzes its situation and incorrectly decides the shotgun is the right weapon to use. To solve this problem, I need to walk through the entire analysis and see how it reaches that conclusion. To make matters worse, the bots don't always choose the shotgun, and there are surely situations where the shotgun is the correct choice. It's very hard to isolate the situations where the mistake was made and then narrow down why it occurred.

My solution was to write some debug code that output all the data the bot used at each step of the weapon selection analysis, starting from accuracy and fire rate data and ending with its final valuation of how quickly a given weapon would score a kill. I had it dump this data for every available weapon and just stared at the numbers, checking whether each data point accurately reflected the concept it was supposed to represent.
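
To give a flavor of what that dump looked like, here's a minimal sketch in C. The structure, the field names, and the damage-per-second formula are my own illustration of the idea, not the actual BrainWorks identifiers:

```c
#include <stdio.h>

/* Hypothetical sketch of the weapon selection debug dump;
 * names are illustrative, not the real BrainWorks code. */
typedef struct weapon_stats_s
{
    const char *name;        /* Weapon name, e.g. "Shotgun" */
    float       accuracy;    /* Estimated chance each shot hits */
    float       fire_rate;   /* Fraction of combat time spent firing */
    float       damage;      /* Average damage per shot that hits */
    float       reload;      /* Seconds between shots */
} weapon_stats_t;

/* Print every intermediate value used in weapon selection so a human can
 * check whether each number matches the concept it's supposed to represent. */
void DebugDumpWeaponStats(const weapon_stats_t *stats, int num_weapons)
{
    int i;

    for (i = 0; i < num_weapons; i++)
    {
        /* A simplified stand-in for the final valuation: expected damage
         * per second, so higher means a faster expected kill. */
        float dps = stats[i].accuracy * stats[i].fire_rate *
                    stats[i].damage / stats[i].reload;

        printf("%-16s acc=%.2f fire=%.2f dmg=%5.1f reload=%.2fs -> dps=%6.1f\n",
               stats[i].name, stats[i].accuracy, stats[i].fire_rate,
               stats[i].damage, stats[i].reload, dps);
    }
}
```

I found three key contributing issues, two of which are solvable.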

#1) The estimation of weapon fire rate was incorrectly bounded

Knowing how frequently the bot will fire a weapon is crucial to determining the weapon's actual damage rate. If a bot spends 3 seconds aiming at a target and only fires for 1 of those seconds, it will do one third the damage of a bot that fires for all 3 seconds, provided it's always lined up for a good shot. The bot tracks how often it actually fires each weapon, but I gave this value a lower bound of 50%. In other words, the bot assumes it will spend at least 50% of its time firing with the weapon, possibly more.
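
As a simplified illustration of why that 50% floor matters, the estimated damage rate scales linearly with the fraction of time spent firing. The numbers here are made up for the example:

```c
/* Simplified illustration with made up numbers: effective damage rate
 * scales linearly with the fraction of aiming time actually spent firing. */
float EffectiveDamageRate(float damage_per_second_while_firing,
                          float fraction_of_time_firing)
{
    return damage_per_second_while_firing * fraction_of_time_firing;
}

/* A bot that fires for 1 of every 3 seconds it spends aiming:
 *     EffectiveDamageRate(90.0f, 1.0f / 3.0f)  -> 30 damage per second
 * versus a bot that fires the whole time:
 *     EffectiveDamageRate(90.0f, 1.0f)         -> 90 damage per second */
```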

I added this assumption to deal with another problem. Bots would switch to a weapon they had never fired before, not have a perfectly lined up shot, and think... "I've spent 0 seconds firing and 100 milliseconds aiming, so I spend 0% of my time attacking. This weapon is terrible. I'm going to use something else." And they would never try out the weapon to see that they could in fact make shots with it. One problem with relying on historical data is wide variance in the initial data sets. I ran some tests with most of the weapons and found the fire rates were in the 60% to 80% range, so 50% seemed like a good bound.

The problem was that the shotgun's fire rate was only between 30% and 50%, because it's hard to line up good shots with that weapon. There were situations where the bot thought it fired 50% of the time when in reality it had only attacked 30% of the time, which inflated the estimated value of the shotgun by roughly two thirds (50 is about 167% of 30). So it's no wonder the bots thought the weapon was good. Lowering the minimum bound to 30% caused other problems, though: sometimes bots would stop using genuinely good weapons like the rocket launcher. That's a sign that a lower bound is not the correct way to solve this problem.
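
Here is roughly what that bound looked like, sketched in C. The constant and function names are mine for illustration, not the real BrainWorks identifiers:

```c
/* Illustrative sketch of the old lower bound; names are hypothetical. */
#define MIN_ASSUMED_FIRE_RATE 0.50f

/* Estimate the fraction of combat time the bot will spend firing a weapon,
 * based on how long it has actually fired versus aimed with it so far. */
float EstimatedFireRate(float seconds_firing, float seconds_aiming)
{
    float observed;

    /* With no history at all, fall back to the minimum assumption. */
    if (seconds_aiming <= 0.0f)
        return MIN_ASSUMED_FIRE_RATE;

    observed = seconds_firing / seconds_aiming;

    /* The clamp that caused the trouble: a weapon whose real fire rate is
     * 30% gets reported as 50%, inflating its estimated value by about two
     * thirds. Weapons in the normal 60% to 80% range are unaffected. */
    if (observed < MIN_ASSUMED_FIRE_RATE)
        observed = MIN_ASSUMED_FIRE_RATE;

    return observed;
}
```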

Related to this is the second issue:

#2) The estimation of weapon fire rate doesn't take location into account

If you're wondering why the shotgun is generally a bad weapon, it's because it's so situational. At point blank range, it can do more damage than the railgun in two thirds the time. But the spread on the weapon makes its value decrease rapidly at medium and long range. There are situations where the shotgun is good, but not many of them. In contrast, weapons like the railgun are consistently good in a wider variety of situations.
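
For what it's worth, the stock Quake 3 numbers bear this out, assuming I'm remembering them correctly: the shotgun fires 11 pellets of 10 damage each every second, while the railgun does 100 damage every 1.5 seconds.

```c
#include <stdio.h>

/* Point blank comparison using the stock Quake 3 values, assuming I'm
 * remembering them correctly and that every shotgun pellet connects. */
int main(void)
{
    float shotgun_dps = (11 * 10) / 1.0f;  /* 110.0 damage per second */
    float railgun_dps = 100 / 1.5f;        /*  66.7 damage per second */

    printf("Shotgun: %.1f dps  Railgun: %.1f dps\n", shotgun_dps, railgun_dps);

    /* At point blank the shotgun lands the railgun's damage in about two
     * thirds of the time; at medium and long range most pellets miss and
     * the advantage evaporates. */
    return 0;
}
```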

This means that a bot might shoot more often with the shotgun at point blank range than at long range because the shots are easier to line up. If the bot had a 40% fire rate with the shotgun, that might be a 50% rate when close to the enemy but only 30% when far away. Similarly, it's easy to line up shots with the rocket launcher when the target is below you, since you can just shoot at the floor. If the target is above you, you need a direct hit, and that's very hard.

If the bot knew how the firing rate could change depending on the combat situation, it would have a much better understanding of whether a weapon is a good choice against the current target. Unfortunately, the code only tracks a single firing rate for each weapon. The concept of where the enemy was located doesn't factor into the data, and that's a real issue.

The obvious fix is to track fire rate data separately for each of the bot's 12 combat zones, but the drawback is that it will take much longer to get good estimates of the actual fire rates. If bots already had problems accumulating fire rate data into one data "bucket", splitting that into 12 "buckets" means it will take roughly 12 times longer before the data stabilizes. In other words, the bootstrapping issue that the 50% minimum fire rate was trying to address has now gotten 12 times worse.
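
Here's a sketch of what per-zone tracking could look like. The zone count comes from the 12 combat zones mentioned above, but the structure and function names are my own; the real BrainWorks data structures are organized differently:

```c
/* Hypothetical per-zone fire rate tracking for a single weapon. */
#define NUM_COMBAT_ZONES 12

typedef struct fire_rate_history_s
{
    float seconds_aiming[NUM_COMBAT_ZONES];  /* Time spent aiming in each zone */
    float seconds_firing[NUM_COMBAT_ZONES];  /* Time spent firing in each zone */
} fire_rate_history_t;

/* Record another slice of combat time against a target in "zone". */
void RecordFireTime(fire_rate_history_t *history, int zone,
                    float aim_time, float fire_time)
{
    history->seconds_aiming[zone] += aim_time;
    history->seconds_firing[zone] += fire_time;
}

/* Estimate the fire rate for whichever zone the current target occupies.
 * With 12 buckets instead of one, each bucket fills 12 times more slowly,
 * which is exactly the bootstrapping problem described above. */
float ZoneFireRate(const fire_rate_history_t *history, int zone)
{
    if (history->seconds_aiming[zone] <= 0.0f)
        return 0.0f;  /* No data yet; this is where seeding would help */

    return history->seconds_firing[zone] / history->seconds_aiming[zone];
}
```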

I'm still thinking about the best way to solve this problem, but I'm leaning towards seeding the data with some reasonable estimates of how often each weapon really should be firing. If there's enough seed data, it should encourage the bot to act reasonably until it has accumulated enough of its own data to draw conclusions.
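
One simple way to do that seeding, continuing the hypothetical per-zone sketch above, is to express the estimate as fake "already observed" aiming and firing time, so the first few seconds of real combat can't swing the number wildly. This is just my sketch of the idea, not code that exists in BrainWorks:

```c
/* Hypothetical seeding: pretend the bot has already aimed with the weapon
 * for SEED_AIM_SECONDS at the expected fire rate before any real combat. */
#define SEED_AIM_SECONDS 10.0f

void SeedFireRate(float *seconds_aiming, float *seconds_firing,
                  float expected_fire_rate)
{
    *seconds_aiming = SEED_AIM_SECONDS;
    *seconds_firing = SEED_AIM_SECONDS * expected_fire_rate;
}

/* Because the seed is a fixed 10 seconds weighed against an ever growing
 * total of real combat time, its influence fades as the bot learns. */
```

That just leaves one more issue: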

#3) Bots do not take into account how a good choice now could be bad later

The major drawback of a situational weapon like the shotgun is that when you use it, you give your opponent some control. They have the power to make your weapon worse just by backing up. While this is a concern for all weapons, the more situational a weapon is, the worse it gets when your opponent exploits its weakness. In other words, the shotgun isn't just bad because it's only useful at close range. It's bad because your opponent can choose to be at medium or long range after you've spent the time to pull out the weapon.

This is unfortunately a much harder problem to solve, since it depends on the local level geometry. If your opponent has no escape routes, the shotgun is still excellent in close quarters. I haven't thought about this problem very much, but my intuition says that solving it is well beyond the scope of BrainWorks. You might be able to analyze how opponents have reacted to your weapon choices in the past, but it's not clear how good that data would be, how it should affect the bot's choices, or whether this problem is even large enough to need such a sophisticated solution.

At any rate, I apologize if this post is a bit more technical than normal, but that's the nature of the work. I'm very interested to hear your ideas for how to tackle these problems. I plan on thinking through possible solutions over this upcoming week and writing the fixes next weekend. If you have thoughts on this, I'd love to hear them.

2 comments:

Anonymous said...

hi ted,
why did you choose not to seed the estimated fire rates so far? because their influence might be too strong?

Ted Vessenes said...

The primary reason was that I didn't think it would be necessary. I figured even 10 seconds of firing would be enough to get a good estimate, but I forgot about the bootstrapping problem.

The other reason was that I wasn't sure what a good seed estimate would be, although that's less of an issue. The best choice would be to seed with a 100% fire rate and if the bot uses the weapon too much, it will find out soon enough.