There's not much difference between driving 55 and 56. Unless, of course, the speed limit is 55, in which case that one extra mile per hour could earn you a speeding ticket. Outdoor temperatures typically range between 0 and 100 degrees Fahrenheit, but people easily notice the difference between 68 and 72 degrees. And yet for other measurements in our lives, small changes go completely unnoticed. A car with a full gas tank drives just as well as a car with a quarter tank left; you only notice the problem when the last drop of gas is gone.
When you design artificial intelligence, everything needs to be broken down into numbers, since a computer can only understand numbers. And often you have no idea which numbers best produce the behavior you want from the AI, or even whether a given number encodes the right concept.
Think of the problem this way. Suppose I have a value that encodes the maximum error a bot can make while attempting to aim. If the AI doesn't behave properly with the initial value I selected, I'll want to try other values. By trying different values, I'll rapidly find out whether this number is very sensitive to changes (like temperature) or insensitive (like a gas tank). But if the bot still doesn't seem to be doing the right thing, there's no way to know what the right value is.
A sensitive number is extremely hard to tweak, because you won't see the right behavior until you get things "just right". If the value doesn't encode the right concept, then that "just right" state won't exist, but you'll never know that. You'll just see all kinds of different values fail in different ways.
And an insensitive number generally just has an impact when it crosses an important boundary (for example, driving 56 instead of 55, or your gas tank being 0% full instead of 1% full). There's often no indication where this interesting numerical boundary might be.
Additionally, values can depend on each other. Perhaps changing this maximum error value makes some other value sensitive, when before that value was insensitive (and thus not aggressively tweaked).
All this is to say that when you write AI, there's an awful lot of number tweaking going on. Whether you do it with simulations, neural network training, genetic algorithms, or by hand, one way or another you'll have dozens of numbers that all need to be "just right" to present the illusion of intelligence. The good news is that once you've spent time finding these values, you'll have excellent insight into how humans approach the problem. That's how research works, I guess.
For example, do you know what value is the single biggest factor in determining a player's aiming accuracy? It's how fast the player moves the mouse, not the error in mouse movement. The most important factor is the maximum allowed acceleration of the bot's simulated mouse. I know this because of all the values that go into aiming (and there's close to a dozen of them), the acceleration factor by far has the largest impact on actual bot accuracy. There are some very good reasons why that is the case, which are a bit too complicated to explain right now. But simply change the maximum acceleration for high skill bots from 1400 to 1600 and you'll see an immediate increase in actual bot performance. This value was one of the last values I decreased before release, actually, so BrainWorks bots didn't have the same ungodly aim that traditional Quake 3 bots have.
At any rate, getting artificial intelligence to feel "just right" really comes down to finding the appropriate things to measure and the values those measurements should have. There's no right way to do this, and certainly no magic solution. BrainWorks uses statistical modeling for some things (weapon accuracy) and fixed numbers derived from simulations for others (aiming). Seeing how much the final result can change from just minor alterations in one number certainly gives an appreciation for the complexity that goes into genuine intelligence. If changing just one of twelve values has a dramatic impact on how skilled a bot appears, try to imagine the complexity of millions of neurons in your brain. And yet they all operate together in a (mostly) cohesive unit. It boggles the mind.