Monday, April 14, 2008

BrainWorks 1.0.1 Released

I've just released the latest update to BrainWorks, implementing the fixes I talked about a few weeks ago. Basically, there's a much more sophisticated algorithm for tracking and estimating how likely a bot is to shoot a weapon in a given situation. You can download it using the links on the right if you're interested in trying it out. Let me know what you think!

Related to this, I've been thinking about the question of when software is done. In some sense I still stand by my answer in The Attitude of the Knife, which is "when you say it is". The flip side is that as long as you still have ideas and commitment, there's always room for continual improvement. For a simple program meant to meet a specific purpose, such as reading your email, there comes a point where there isn't much more room for feature development. Artificial intelligence isn't like that though. To this day, researchers are still trying to figure out how different areas of the human brain work. And contrary to popular opinion, humans are not the end of evolution. The human brain itself continues to refine and advance itself through generations.

I believe the line dividing things that can be finished from things that cannot is the line of self-reference. When a problem is best solved by something that can analyze and correct its own mistakes, a whole new field of issues applies. For an in-depth explanation of why this is, I highly recommend the second of three books that influenced my mental framework for understanding the world. That book is Gödel, Escher, Bach: An Eternal Golden Braid, and it is about the nature of intelligence. Very roughly paraphrased, the book discusses the mathematical result known as Gödel's Incompleteness Theorem, which says that any consistent formal system powerful enough to describe itself contains statements that are true but unprovable within that system. The theorem originally arose from attempts to work out some issues in Principia Mathematica, an attempt to derive all mathematical truths from first principles. However, the incompleteness theorem sheds unanticipated light on philosophy and, by extension, on the nature of thought and intelligence.

Viewed in the context of intelligence, you could conclude that there are things an intelligent person would do for which no describable algorithm could determine what they are. Perhaps this represents acts of creativity, intuition, and insight. Or perhaps those things are describable, but other things are not.

Applied to Artificial Intelligence, it means that there are some aspects of AI that we cannot solve; we can only approximate. And there's always room for better approximations. This is the real reason that AI development can last forever. You're never really done. At least not until you say you are.

3 comments:

Anonymous said...

Funny I'm reading this only now, but I twittered something along these lines this morning:

"Art is never finished, only abandoned." --Leonardo da Vinci

Code seems like the same deal to me!

Alex

Anonymous said...

I saw a nightmare bot get into a shotgun battle with another bot at a distance of about 2 lightning gun reaches, so it barely hit with any bullets. At that range, a player would either choose to close in on their target, or just completely ignore the target.

The most reasonable weapon system I can think of is something like this; I'm just making up the range and falloff numbers for general effect.

Base priority is how much default weight the weapon has.
Optimal range is the preferred distance to the target.
Priority falloff rate is the rate at which priority falls as the distance from the optimal range increases in either direction.
Accuracy modifier is a general modifier to priority based on how good the bot believes its accuracy is. A bot with low accuracy would be more inclined to use a rocket launcher and less inclined to use a rail gun.

You could multiply the final priority levels by individual bots' preferences for weapons, and only switch if another weapon's priority is at least 10% higher, to prevent rapid back-and-forth switching.

Gauntlet
Base priority: 7
Optimal range: 0
Priority falloff rate: 14
Accuracy modifier: None

Machine Gun
Base priority: 7
Optimal range: 350
Priority falloff rate: 3
Accuracy modifier: High

Shotgun
Base priority: 9
Optimal range: 125
Priority falloff rate: 6
Accuracy modifier: Medium

Grenade Launcher
Base priority: 9
Optimal range: 200
Priority falloff rate: 6
Accuracy modifier: Very high

Rocket Launcher
Base priority: 10
Optimal range: 300
Priority falloff rate: 2
Accuracy modifier: Low

Lightning Gun
Base priority: 14
Optimal range: 150
Priority falloff rate: 10
Accuracy modifier: Medium-high

Rail Gun
Base priority: 11
Optimal range: 750
Priority falloff rate: 2
Accuracy modifier: Very high

Plasma Gun
Base priority: 12
Optimal range: 200
Priority falloff rate: 5
Accuracy modifier: Medium

BFG10k
Base priority: 20
Optimal range: 275
Priority falloff rate: 1
Accuracy modifier: Low
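The scheme above could be sketched in C (the language of the Quake 3 codebase) roughly as follows. This is only an illustration of the commenter's idea, not BrainWorks code: the struct, the function names, the linear falloff formula, and the 0–1 accuracy scale are all my own assumptions.

```c
#include <math.h>

/* One entry per weapon, mirroring the stat tables above.
 * accuracy_mod encodes "None" through "Very high" as 0.0 .. 1.0. */
typedef struct {
    const char *name;
    float base_priority;   /* default weight of the weapon */
    float optimal_range;   /* preferred distance to the target */
    float falloff_rate;    /* priority lost per 100 units away from optimal */
    float accuracy_mod;    /* how strongly priority depends on bot accuracy */
} weapon_profile_t;

/* Priority falls off linearly as distance moves away from the optimal
 * range in either direction, then gets scaled by the bot's perceived
 * accuracy (0.0 = terrible, 0.5 = average, 1.0 = perfect).
 * The exact formula is an assumption for illustration. */
static float weapon_priority(const weapon_profile_t *w,
                             float distance, float bot_accuracy)
{
    float offset   = fabsf(distance - w->optimal_range) / 100.0f;
    float priority = w->base_priority - w->falloff_rate * offset;
    if (priority < 0.0f)
        priority = 0.0f;
    /* High accuracy_mod weapons reward accurate bots, punish poor ones. */
    priority *= 1.0f + w->accuracy_mod * (bot_accuracy - 0.5f);
    return priority;
}

/* Only switch when the challenger beats the current weapon by at least
 * 10%, implementing the hysteresis rule that prevents rapid switching. */
static int should_switch(float current_priority, float challenger_priority)
{
    return challenger_priority >= current_priority * 1.10f;
}
```

In this sketch an average-accuracy bot holding the shotgun at its optimal range of 125 units scores exactly its base priority of 9, and another weapon would need a priority of at least 9.9 before the bot switched to it.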

Anonymous said...

Err, the rocket launcher should have an optimal range of 150, and a slightly greater falloff rate.