I read an exceptionally good essay by Paul Graham titled "Lies We Tell Kids". It talks about white lies, the kind you tell to protect people rather than to manipulate them. It's worth reading in full, but let me summarize what I took away from it:
People usually lie because the listener cannot emotionally or mentally handle the truth. While this is often the best thing for the listener at the time, it has a cost: they do not understand reality as it truly is, and they are unprepared for that truth when they eventually encounter it some other way.
As I wrote earlier, I see strong parallels between designing AI and being a parent. There's a lot to be said for the "lies" that we as AI designers tell our AI: "No really, you can have perfect information about the entire world. And if you have to spend a long time thinking, that's okay. The entire world will stop and wait for you to decide." Whenever we let our AI cheat at the game, look up information it shouldn't know, or otherwise do things a player can't do, we are in some sense lying to our AI about what reality is like. And the price we pay is this:
We live in fear that our AI will encounter a game situation where the player will know the AI has cheated.
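To make the first of those lies concrete, here is a minimal sketch in C (BrainWorks' own language) of the difference between a bot that reads the player's position straight out of the game state and one that only trusts what it could actually perceive. None of this is BrainWorks code; the types and function names are invented for illustration.

```c
/*
 * A minimal sketch of the "perfect information" lie. This is not
 * BrainWorks code; the types and function names are invented.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3_t;

typedef struct {
    vec3_t player_pos;      /* the player's true position in the game state */
    bool   player_visible;  /* did a line-of-sight check actually see them? */
} world_t;

/* The cheating bot reads the answer straight out of the game state,
 * whether or not it could possibly have perceived it. */
static vec3_t cheating_target(const world_t *w)
{
    return w->player_pos;
}

/* The honest bot only uses what it genuinely perceived this frame, and
 * falls back on its last known estimate when the player is hidden. */
static vec3_t honest_target(const world_t *w, vec3_t last_known)
{
    return w->player_visible ? w->player_pos : last_known;
}

int main(void)
{
    world_t w = { { 100.0f, 50.0f, 0.0f }, false };  /* player is hidden */
    vec3_t  last_seen = { 80.0f, 40.0f, 0.0f };      /* stale estimate   */

    vec3_t cheat  = cheating_target(&w);
    vec3_t honest = honest_target(&w, last_seen);
    printf("cheating bot aims at (%.0f, %.0f)\n", cheat.x, cheat.y);
    printf("honest bot aims at   (%.0f, %.0f)\n", honest.x, honest.y);
    return 0;
}
```

The cheating version is cheaper and never loses track of the player, which is exactly what makes it tempting; the honest version has to carry state around and can simply be wrong.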
I spend a lot of time talking about good AI and bad AI, and why it's often worth the time to write good AI. But I'm not going to say we shouldn't lie to our bots! I recognize that "bad AI" has a place in AI design. Sometimes a corner must be cut because a problem is simply too difficult. And often these are not isolated exceptions. It's possible to write enjoyable AI that cuts an awful lot of corners as long as you know the right corners to cut.
For example, a bot can get away with almost anything if the player can't see the bot. The amount of cheating a bot can get away with dramatically decreases once the player actually watches it, in the same way that a magician's tricks only work when you watch the hand he wants you to watch.
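Here's a sketch of that idea, again with invented names rather than real BrainWorks code: the expensive, human-plausible logic runs only when the bot is being watched, and the cheap shortcut runs when nobody can see it.

```c
/*
 * A sketch of visibility-gated corner cutting. Everything here is a
 * hypothetical stand-in, not BrainWorks code: the expensive "good AI"
 * path runs only when the bot is being watched.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    float pos;      /* bot position along a one-dimensional patrol route */
    float goal;     /* where the bot is trying to go                     */
    bool  watched;  /* is any player currently looking at the bot?       */
} bot_t;

/* Expensive, human-plausible movement: small steps per frame; a real bot
 * would also handle turning, acceleration, and obstacle avoidance. */
static void simulate_humanlike_step(bot_t *bot)
{
    const float step = 0.5f;
    if (bot->pos < bot->goal)
        bot->pos += step;
    else if (bot->pos > bot->goal)
        bot->pos -= step;
}

/* Cheap corner cut: nobody can see the bot, so just put it where the
 * plan says it should be. */
static void teleport_to_goal(bot_t *bot)
{
    bot->pos = bot->goal;
}

static void update_bot(bot_t *bot)
{
    if (bot->watched)
        simulate_humanlike_step(bot);  /* a shortcut here would be noticed  */
    else
        teleport_to_goal(bot);         /* the cheat is invisible, so cut it */
}

int main(void)
{
    bot_t bot = { 0.0f, 10.0f, false };

    update_bot(&bot);                  /* unwatched: snaps to the goal  */
    printf("unwatched: pos = %.1f\n", bot.pos);

    bot.pos = 0.0f;
    bot.watched = true;
    update_bot(&bot);                  /* watched: moves one small step */
    printf("watched:   pos = %.1f\n", bot.pos);
    return 0;
}
```

The design question is entirely about where to put that line-of-sight check: behind it, the corner cut is free; in front of it, it's the kind of cheat players notice and resent.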
The real key to writing good AI is learning which corners to cut and which problems must be explicitly solved in a human-like manner. In other words, it's about determining when to write bad AI and when to write good AI. In BrainWorks, I certainly erred on the side of too much good AI, but I think this is still a long-term benefit. Once someone has solved the problem (and released the source code), everyone can reap the benefits and we can all move on to harder problems. But actually writing artificial intelligence that doesn't cheat about anything is an excruciatingly difficult problem. An AI that does everything like a human does would appear to observers to be a human. It could pass a Turing Test, the Holy Grail of Artificial Intelligence research.
When I say "Holy Grail", I mean it in more ways than one. Even the most optimistic estimates on when we'll be able to create a truly artificial mind say it won't be done for decades. My personal estimates are that it will be done in 300 to 1000 years. And some people claim that it is literally impossible to create an artificial mind, that it so complex that it can only be evolved or created by God (depending on your philosophical preference).
If you want an example of what roughly 700 years of research will give you, consider that the atomic bomb was created only 500 years after the birth of Leonardo da Vinci. Think about how much science advanced in those 500 years. While that span was centuries long, it was punctuated by thousands of little scientific discoveries about how the universe worked. Einstein couldn't have had his flash of insight about how Newton was slightly wrong if Newton hadn't done the work to come up with equations that were 99.9999% right.
I believe the next 700 years of AI research (give or take a little) will be punctuated in the same way. Humanity's journey to the creation of an artificial mind will be made up of little advancements. No one person could do all the work necessary to design everything needed for a true artificial mind, but every time an AI designer chooses to do a bit of "good AI" rather than cutting corners with "bad AI", we get one step closer to the final goal, even though the destination is light years away. I will not fault anyone for writing AI that cheats; I've done the same. But writing AI that cheats even a little less than usual is worthy of high praise.
Monday, May 26, 2008
2 comments:
Ray Kurzweil has the convergence of technology and the human mind happening more along the lines of the 2020s. And he's been right before. Often.
I'm aware of his prediction, but still very skeptical. Marvin Minsky has been right a lot too and his estimates aren't as generous.
As it stands, there still needs to be at least one more major breakthrough in natural language processing. Every generation of AI researchers seems to claim they'll have that problem solved in the next 10 to 15 years, but if you look at the actual progress made over the past 60 years, it has been steady but still extremely slow.
I think that by the end of my life, I won't see true AI, but I'll have a very good idea of when it will come into being.