I've just released the latest update to BrainWorks implementing the fixes I talked about a few weeks ago. Basically there's a much more sophisticated algorithm for tracking and estimating how likely a bot is to shoot a weapon in a given situation. You can download it using the links on the right if you're interested in trying it out. Let me know what you think!
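To give a flavor of what "estimating how likely a bot is to shoot" might involve, here is a minimal sketch of one way to score a fire decision. This is not the actual BrainWorks code (which is written in C), and every name and weighting here is invented for illustration; the idea is just that several situational factors combine into a single probability-like score.

```python
def estimate_fire_probability(hit_chance, damage_per_shot, target_health,
                              ammo_remaining, ammo_capacity):
    """Return a 0..1 score for how worthwhile firing right now is.

    All parameters are hypothetical, for illustration only:
      hit_chance       -- estimated chance this shot lands (0..1)
      damage_per_shot  -- expected damage if the shot hits
      target_health    -- opponent's remaining health
      ammo_remaining / ammo_capacity -- used to conserve scarce ammo
    """
    if ammo_remaining <= 0:
        return 0.0
    # Expected damage, capped at what's needed to finish the target.
    expected_damage = hit_chance * min(damage_per_shot, target_health)
    # Value of this shot relative to the damage needed for a kill.
    value = expected_damage / max(target_health, 1)
    # Scale down when ammo is running low, so unlikely shots get skipped.
    ammo_factor = ammo_remaining / ammo_capacity
    return min(value * (0.5 + 0.5 * ammo_factor), 1.0)
```

A real bot would fold in far more context (weapon reload time, splash-damage risk, movement prediction), but even this toy version shows why the problem invites endless refinement: every factor is itself an estimate that can be modeled better.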
Related to this, I've been thinking about the question of when software is done. In some sense I still stand by my answer in The Attitude of the Knife, which is "when you say it is". The flip side is that as long as you still have ideas and commitment, there's always room for continual improvement. For a simple program meant to serve a specific purpose, such as reading your email, there comes a point where there isn't much more room for feature development. Artificial intelligence isn't like that though. To this day, researchers are still trying to figure out how different areas of the human brain work. And contrary to popular opinion, humans are not the end of evolution. The human brain continues to refine and advance itself through the generations.
I believe the line dividing things that can be finished and things that cannot is the line of self-reference. When the problem is best solved by something that can analyze and correct its own mistakes, a whole new set of issues applies. For an in-depth explanation of why this is, I highly recommend the second of three books that influenced my mental framework for understanding the world. That book is Gödel, Escher, Bach: An Eternal Golden Braid, and it is about the nature of intelligence. Very roughly paraphrased, the book discusses the mathematical result known as Gödel's Incompleteness Theorem, which says that any consistent formal system powerful enough to describe itself contains statements that are true but unprovable within the system. Gödel originally proved the theorem while working through some issues in Principia Mathematica, a project that attempted to derive all mathematical truths from first principles. However, the incompleteness theorem offers unanticipated insights into philosophy and, by extension, the nature of thought and intelligence.
Viewed in the context of intelligence, you could conclude that there are things an intelligent person would do for which no describable algorithm could determine what those things are. Perhaps this represents acts of creativity, intuition, and insight. Or perhaps those things are describable, but other things are not.
Applied to Artificial Intelligence, it means that there are some aspects of AI that we cannot solve; we can only approximate. And there's always room for better approximations. This is the real reason that AI development can last forever. You're never really done. At least not until you say you are.