Showing posts with label pragmatism. Show all posts

Monday, October 19, 2009

We Can Worry About That Later

As I've been in the middle of purchasing a house, my life has been extremely hectic. The past several weeks have been filled with housing inspections, mortgage applications, and reviewing and signing legal documents. And naturally every detail has someone who wants to renegotiate it. After closing, we'll still have painting, moving, decorating, and furniture purchases to handle. My life is really stressful.

A few days ago my wife and I were reviewing the upcoming task list and for one item I mentioned, "we can worry about that later". I then realized I use that phrase an awful lot, and to me it means "we can handle that later". In other words, to me worrying is synonymous with work. Put negatively, I don't stop worrying until the work has been completed.

It was an excellent moment of self reflection. I simultaneously realized why I'm so driven, intense, productive, and stressed. This life attitude has its benefits, but it's certainly not healthy in the long term. A few weeks ago I wrote about how the secret of a successful marriage is reducing stress. Perhaps it's better to say:

The secret of happiness is reducing stress.

After all, what else is stress but discontent about the possible future? It's a tricky balance though. If you live life in the future, you'll solve all these potential problems but always be tense as a result, never enjoying the present. If you live in the present you'll enjoy it only until something happens that you should have dealt with. How do you focus on the present while building your future? Personally, I feel like I don't balance these constraints very well.

So I've been thinking about different ways to manage stress. Some common things people try include:
  • Eat and/or drink
  • Have Sex
  • Exercise or spend time outside
  • Sleep or practice deep breathing
  • Read a book, watch a movie, or play a game
  • Daydream or imagine good things
  • Procrastinate by doing less important work
  • Remove the source of stress
Personally I spend a bit too much time on the last two items. Classifying these options, they seem to fall into one of three categories:
  • Solve the issue
  • Ignore the issue
  • Accept the issue
Unfortunately, if the only mechanism for relieving stress is to solve the problem, you're in for a rough life-- there's always something else you can worry about, and many things you can't fix. Ignoring issues seems fine for small problems. And acceptance is the only option available for problems too large to be solved or ignored.

I believe the real secret to happiness is properly identifying which problems should be accepted and which should be solved. And then realizing that most problems are of the former type. It's easy to get caught up in trying to fix everything, especially as a perfectionist. But the more you genuinely accept misfortunes as Not A Big Deal, the more you can enjoy the truly good things in your life.

If that's true, then the real secret to happiness is forgiveness.

Monday, October 5, 2009

Squeezing The Margins

My father has been a business consultant for decades, my mother is a certified financial planner, and my brother has run several small companies. So as you'd expect, my family talks a lot about money-- both now and when I was growing up. Much of the discussion is about how to make the most money.

For example, if you have a savings account that earns 3% a year and have a credit card balance that costs you 10%, you have no reason to keep anything in savings until that credit card is paid off. Not paying off the credit card costs you 10%, bringing the net return on savings to 3% - 10% = -7%, or a 7% loss. The 3% return isn't the full story on investment. You also lose 10% in "opportunity cost", since saving the money means you forgo the opportunity of paying down debt.

On the other hand, if you have some subsidized government school loan with a 4% interest rate and you are earning 5% on average from savings, you will make more money by paying the loan off as slowly as possible, putting all additional money into savings. When you put $100 into savings rather than paying this loan, you owe 4% more in loan interest ($4) but earn 5% more in savings interest ($5). This is a net gain of 1% ($1).
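Both examples come down to the same comparison: the difference between what your money earns and what your debt costs. Here's a quick sketch of that arithmetic, using the hypothetical rates above (the function name is my own, just for illustration):

```python
# Sketch of the comparison above: should an extra $100 go to savings
# or toward paying down debt? The deciding factor is the rate spread.

def net_gain_of_saving(amount, savings_rate, debt_rate):
    """Yearly gain (or loss) from putting `amount` into savings
    instead of paying down debt charged at `debt_rate`."""
    return amount * (savings_rate - debt_rate)

# Credit card example: 3% savings vs. 10% card -> saving loses $7/year.
print(round(net_gain_of_saving(100, 0.03, 0.10), 2))  # -7.0

# Student loan example: 5% savings vs. 4% loan -> saving gains $1/year.
print(round(net_gain_of_saving(100, 0.05, 0.04), 2))  # 1.0
```

Whenever the spread is negative, every dollar in savings is quietly losing money.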

I've heard all kinds of other, more complicated ideas to leverage money for potentially greater returns. All the ideas come down to this basic concept though:

Maximize the spread between your expected investment interest rate and your debt interest rate.

If you could get a loan with a 7% rate that funds a business with an expected return of 8%, then that's 1% better than not doing anything at all. In theory.

The problem is that more complicated investment strategies have a greater chance of going wrong. There are simply more potential sources of failure. In the real world there is no such thing as a guaranteed 8% return on investment. More typically, there's a pretty high chance the return will fall between, say, 6% and 10%, and a very low chance you'll get a 0% return or even less. So maybe you'll get lucky and get a free 3%. But another possibility is that the investment returns 6%. The business would consider this a success, but it cost you 7% to make this investment, so you're down 1% for your effort. And there's always the chance of a catastrophic failure, meaning you lose most or all of your investment.

Lately I've been rethinking this strategy of squeezing the margins. The increased risk is certainly part of it. Mathematically, trading a guaranteed $1,000 for an investment that returns between $900 and $1,400 most of the time and occasionally $0 will maximize your money in the long run. But maybe maximizing your money isn't the end goal of life. The potential life impact of losing between $100 and $1,000 is probably a whole lot worse than the best-case scenario of gaining $400.
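To make that tradeoff concrete, here's a small sketch comparing the expected value of the risky bet against the guaranteed $1,000. The probabilities are invented for illustration; the post gives ranges, not exact odds:

```python
# Sketch: expected value of the risky strategy vs. a guaranteed $1,000.
# The probabilities below are made up purely for illustration.

guaranteed = 1000

# (probability, payoff) pairs: usually somewhere in the $900-$1,400
# range (midpoint $1,150), with a small chance of total loss.
outcomes = [
    (0.95, 1150),
    (0.05, 0),
]

expected = sum(p * payoff for p, payoff in outcomes)
print(round(expected, 2))  # 1092.5
```

On average the risky bet wins, but the average hides the 5% chance of losing everything-- and that tail is exactly what "mathematically optimal" glosses over.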

Additionally though, more complicated investments just take more time to manage and stay on top of. The purpose of money, at least in my life, is to give me more time and reduce my stress. Working hard for an extra 1% isn't worth it if I can't turn that money back into the time I spent to get it. Squeezing the margins isn't the key to financial security, though it does maximize your odds of getting lucky and striking it rich. Here's the strategy I use:

Reduce, Reuse, Recycle

This is the recycling motto. Most people don't know this, but these steps are ordered by importance. In other words, the most important thing you can do is reduce the amount of things you obtain that need to be recycled. The second most important thing is to reuse the things you have. And if you can neither do without them nor reuse them, then as a last option you should recycle them. Sadly, most people these days focus on "recycle" and ignore the other two.

This mindset applies just as well to finances though, and with the same priority scheme:
  • Reduce: Don't buy things you don't need
  • Reuse: Don't have money in so many places you can't keep track of all of it
  • Recycle: Put your money in the place it gives the highest return
That third rule is the same one as "squeezing the margins". Squeezing gets you into trouble when it violates the second rule-- once it takes a lot of effort to manage your investments, you are setting yourself up for a fall. But most importantly, the secret of wealth is not buying things you don't need.

Monday, September 7, 2009

Being Mr. Right

I've been happily married for the past nine years, and we dated over four years before that. Like everyone, we've had ups and downs, but the experience has been overwhelmingly positive. We have very much grown together. We complement each other well. Not everyone has the same experience, however. I'm conscious that our happy marriage is in large part because of the effort we've put into it, not because "we're so totally in love" or "we're perfect for each other". Those things are true, and they're required for a successful marriage. But neither being in love nor being compatible is sufficient. Contrary to popular opinion, love isn't all you need. If you want your marriage to be a success, or any relationship for that matter, it helps to understand the purpose of that relationship.

When we were teenagers, my brother said something really insightful to me:
Americans don't get divorced when they fall out of love. They get divorced when it's less trouble than staying married.
His point was that many people stay in unhappy marriages because to them, it's better than being alone. But ultimately he's hinting at a rather pragmatic view of marriage. There's so much description of marriage in terms of love and everlasting commitment, but I think that glosses over this simple fact:

People get and stay married because it improves their life.

Simply stated, people think they are happier married than single. The list of common causes for divorce looks strikingly like common causes for depression:
  • Financial trouble
  • Child raising issues
  • Sexual incompatibilities
  • Infidelity
  • Lack of Communication
  • Physical or Mental Abuse
  • Addictions
  • Lack of Compatibility
Really, all these issues boil down to one root cause:
  • Stress
Either the couple disagrees about how to handle an issue and knows it (a disagreement) or they don't know they've disagreed (a miscommunication). Suppose one person wants to spend money on food, housing, and a fancy car. And the other person wants to spend it on food, housing, and travel. If they buy all four things and get into debt, they'll end up in a situation where they can buy neither fancy cars nor nice vacations. Financial stress will tear a marriage apart.

But so can any stress. When parents disagree about how children should be raised, they are likely to blame problems the children create on the other spouse's decisions. If one spouse wants sex once a day and the other wants it once a month, then at least one person will be unhappy, but probably both.

The secret to a successful relationship is using it to reduce life stress rather than create it.

That's why love is necessary but insufficient. Love is merely the motivation that makes you decide the other person is worth the effort. The secret ingredients are solid communication skills and a willingness to compromise.

Good communication prevents small problems from becoming large problems. For example, suppose one partner says, "Let's talk about that later" whenever she's feeling a bit overwhelmed with an issue and needs some time to process alone. But her husband interprets that as, "I don't want to talk about this at all." He might end up feeling emotionally shut out by her. And she might feel neglected because he never initiates the conversation at a later date. This problem is completely avoidable as long as both people take the time to express how they interpret what the other person says and does; the misinterpretation would become immediately clear. Without taking time for that, though, these irritations build up into years of needless emotional pain.

Of course, sometimes both people understand each other perfectly well but disagree about what they want to do. In the case of a husband who wants to buy a fancy car and a wife who wants to travel abroad, trying to do both will put them in financial trouble, so that's not an option. Sometimes they just have to compromise on what they want for the sake of the other person. Maybe that means buying a Nissan instead of a BMW, and that they travel to Miami instead of Paris. Or maybe it means they travel to Paris this year, but they buy a BMW in two years. Both people need to realize that their needs can't be the first priority 100% of the time, nor can the same be true of the other person. The overall happiness of the couple needs to be more important than any one particular desire.

There are also situations where couples understand their differences of opinion and are unwilling to compromise. Abusive relationships fall under this category. When the husband believes it's okay to beat up his wife and the wife disagrees, they shouldn't compromise by saying it's only okay to beat her on certain days of the week. If you can't agree to disagree, that's a sign the relationship needs to end. No amount of love or compromise will stop an abusive spouse or convince someone to quit their drugs. They've declared what they want out of life, and it's simply not compatible with what you want.

This is really the category of "irreconcilable differences", and applies to innocuous things as well. For example, desired frequency of sex. If one person is unwilling to have sex more than once a month and the other person wants sex daily, there aren't a lot of options. One person could start having an affair, they can agree to an open relationship (essentially a sanctioned affair), they can "compromise" by only having sex monthly and building long term resentment, or they can break up. Breaking up seems like it causes the least emotional pain in the long run in most cases, which means it's often the best choice. There's no shame in breaking up when you realize things won't work out. You just weren't the right people for each other.

Monday, July 27, 2009

Only When It's Funny

As I believe the purpose of life is to be enjoyed, I consider making people laugh to be one of the most important things I do. That's ironic, given that this blog is particularly unhumorous. Don't worry though; that's not a mistake I intend to correct today. Rather, I want to share a bit more about humor and what makes things funny.

When I was a kid, I saw the movie Who Framed Roger Rabbit. While it's full of cliche and loaded with cheap slapstick gags, it also includes a surprising amount of sophisticated humor and wit. I'll always remember one line from the movie though. Roger Rabbit is wanted for murder and trying to avoid the police. He inadvertently handcuffs himself to detective Eddie Valiant. The two finally make their way to a safe location where Eddie gets his hands on a hacksaw to remove the cuffs. If you want to watch how the scene unfolds, here it is.

Roger just slips his hand out of the cuffs and asks if that helps. Eddie is clearly pissed and asks if Roger could have done that at any time. Roger responds:

No, not at any time. Only when it was funny!

I love the implication that the toons don't have complete and total power over reality. They can only break the laws of physics when it's funny to do so. For example, Wile E. Coyote is only allowed to run off a cliff and walk on air when he doesn't notice he's not standing on solid ground anymore. Once he realizes that he's supposed to fall, he falls.

Furthermore, Roger's response speaks to the value of comedic timing. Things are funny in large part because of their context. If you watch any professional comedian, you'll see a clear difference between waiting one second and waiting two before delivering the next line; some responses would simply be less funny delivered a second sooner. Jon Stewart of Daily Show fame is a particularly good example of this. Comedians with perfect timing learn just how long they need to wait before their audience comes to a certain mental conclusion. The humor derives from contrasting the comic's next statement with your current thought, and if the line is delivered too soon (or too late), it doesn't provide the right contrast.

Of course, knowing that timing matters isn't the same as knowing what to say or when to say it. No two people laugh at the same things either, so humor is a very personal thing. What makes something funny anyway?

I believe all humor reduces to cognitive dissonance between expectations and reality. When you expect life to be one way and it's different, your mind has a few possible responses. You either get offended or you laugh about it. Which response you have depends on how much you care about the topic. For example, suppose you mention that you've been feeling a bit sick for the past few days and your friend deadpans, "It's probably swine flu." Whether you chuckle depends on whether you actually think you have swine flu.

Not to be pedantic, but it's worth analyzing this joke in detail. The crux is that people overestimate the probability and danger of catching swine flu. Your friend is pretending to be one of these people who overreact, parodying their alarm. Rationally we know that the odds of actually catching it are extremely low-- more people die of the regular flu than swine flu. So in an ideal world, people wouldn't be worried about it, and that's the source of the cognitive dissonance. The joke boils down to, "people are worried about swine flu but shouldn't be."

If you agree with that statement, you'll laugh with your friend. But if you think people aren't overestimating the deadliness of an epidemic which has claimed far fewer lives than car accidents have in the past six months, you'll be offended. Your friend's joke became a criticism of your own perspective.

And that's the beauty of comedy. Laughter is a reflection of what we consider unimportant, and the vast majority of our lives really don't matter. A 14 year old girl might be mortified for farting in class, but she would be a lot happier if she could laugh like the rest of her classmates. After all, no one cares as much as she thinks they do. If you want to be happy in life, you have to be humble enough to accept that you just aren't that important. Only then can you laugh at just how crazy and awesome this world really is.

Monday, June 29, 2009

Optional Law

Imagine there was a law that required everyone to pay a $100 tax each year. However, there is no way to find out who paid their tax and who didn't. All the government knows is the total amount of money that comes in and therefore the percent of people paying the tax. Is avoiding this tax ethically justifiable?

I don't mean to suggest that actions are only unethical if you get caught; I don't believe that is the case. When 10 people commit similar crimes and only 9 get caught, that doesn't make it right for the guy who got away. Rather I'm asking whether it's ethical to break a completely unenforceable law. I'm not sure, and perhaps it depends on how heinous the forbidden action is. But our perceptions of right and wrong change over time.

As a concrete example, consider the anti-sodomy laws that many American states had before they were struck down as unconstitutional. These laws restricted the sexual conduct of consenting adults and were virtually impossible to enforce. In the centuries since these laws were first passed, the majority public opinion no longer regards oral and anal sex as immoral.

The general argument against unenforceable laws is that they only punish people honest enough to follow them. In the tax example, both honest and dishonest people would get the benefit of however the nation spent the tax money, but only the honest people paid for it. This is not fair, though even unfairness does not necessarily make ignoring the law ethical. The argument also gets murkier when the law is only partially unenforceable, and when the prohibited action has a negative impact on society as a whole. Murder is wrong even if the law against it could never be enforced, for example.

A more recent example is the set of laws against unauthorized music downloads. Music companies don't want people using the things they produce without paying for them, but modern technology has made it very easy for people to do this anyway. The basic structure of the problem looks like this:
  • Party A (the music company) owns the rights to some information.
  • They allow party B (the purchaser) to use this information, but not to distribute it.
  • However, they cannot prevent party B from giving it to party C (the downloader).
At first, music companies tried to stop downloaders, but it's really too difficult to figure out who they are. So the latest tactic has been going after purchasers who distribute the music online. Even that has been very difficult, often requiring subpoenas to ISPs, and these tactics have not substantially increased music company profits or decreased illegal downloading.

The question is whether it's ethical to download music you haven't purchased. There is not much benefit to society from unauthorized downloading, but there is also not much cost. It's true that record companies lose some money from people who would otherwise have paid, but not every downloader was a potential customer. Plus the increased exposure from free downloads has in some cases turned into more paying customers, though I suspect the net effect is still a loss most of the time.

At any rate though, the real problem is that music companies are distributors who can no longer control their product distribution. No amount of legislation can make this business model profitable again; they need to find a new way to add value to customers before customers will give them money again. I'm undecided about whether illegal downloads are ethical. That said, music companies need to suck up the fact that it will happen anyway and stop trying to fix unsolvable problems.

Of course, it's easy for consumers to tell that to music companies, but harder to take that advice yourself. Here's the grand irony: The same people who support legalization of free online music are generally opposed to unauthorized government surveillance. When it's someone else taking their own words without permission, they strangely don't see freedom of information in the same light.

But the government's wiretapping program has clear parallels with music downloads:
  • Party A (music companies or a citizen on a phone) is the owner of information.
  • Party B (music purchasers or the telephone company) was given this information with the understanding it would not be given to people that A did not authorize.
  • Party C (music downloaders or the government) was given the information by party B anyway.
Similarly, it's not clear what the costs and benefits to society are. The government might be making the country more secure, or maybe the information doesn't help. No one wants to have their privacy invaded, so clearly there is a cost, but I'm not convinced the cost is freedom. Using the information as a way to ferret out "unpatriotic citizens" isn't that practical given the volumes of data that need to be analyzed by hand. Government agencies only have so much money to spend, and the biggest threats to national security are not unpatriotic ideas-- they are crazy people with bombs.

The biggest societal danger is of high level politicians trying to access information from specific domestic adversaries and using it for personal gain, but you don't need a massive government program to cull information on just a few thousand civilians. While this danger is real, I suspect it existed well before the Bush administration set up this widespread surveillance program.

So just as I haven't made up my mind about music downloads, I also can't decide on warrantless surveillance of citizens. They seem similar enough that they should both be moral or both be immoral. It's possible to argue for one and not the other, but that's a difficult argument to make; I'm not sure what that argument would be.

But there's one thing I am certain of:

Like the music companies, we should worry about solvable problems, not unsolvable ones.

No amount of legal pressure will have any impact on warrantless wiretapping. If you really care that the government might find out that you're meeting friends for drinks, then don't tell people that over the phone. Certainly you should stop posting it on Twitter. Or alternatively, learn to care just a little less about privacy. In the grand scheme of things, none of us are really that important, so you might as well be happy instead.

Monday, March 23, 2009

Best of Both Worlds

A while ago I spoke with the head of a computer game development company. He explained to me that from an economics perspective, it's really important to keep sight of the player as a client of your company:
Imagine a graph of the player's enjoyment over time. The ideal arcade game starts the player with an amazing level of enjoyment, like eating chocolate cake with naked supermodels. Then after a minute or so, the enjoyment drops to zero so they put in another quarter.

Retail games are totally different. You only get their $50 once, meaning you want them playing the game as long as possible. This is so they'll recommend the game to friends. Having an absolutely amazing first minute isn't as important as long term playability and the general amount of content.
His comments stuck with me. A game designer needs to understand why players are supposed to enjoy the game, and whether it's more important to spend resources in one area or another. An arcade-style game-- including modern pay-as-you-go online games-- should focus on eye candy and general visual "bling". The latest flavor of PC and console games needs only enough eye candy to initially attract people, and then gets more mileage from content. No one is happy with a pretty game that's finished in 3 hours.

He implicitly made one other point. Games are a tool for invoking emotional responses. He measured enjoyment, but games have the power to make us laugh and cry. They can frustrate us and make us rejoice. They can make us feel fearful and triumphant. It takes a truly brilliant game to do all of these things.

For me, that game is Thief: The Dark Project, from Looking Glass Studios. And by Thief, I also include the amazing sequel, Thief 2: The Metal Age. If you are unfamiliar with the game, I recommend watching Yahtzee's review at Zero Punctuation. It's delivered in his usual style of sarcasm and obscenities, but it's a very good review.

To summarize, Thief is a game that's the categorical opposite of a standard shooter. Rather than being armed with a dozen different firearms and enough ammunition to kill an entire brigade of bad guys, you play the role of Garrett, the master thief. Garrett is so weak that he's likely to die if he's involved in a fair fight with even one guard. Your whole job is to make sure that you never have to fight fair, ideally by avoiding fights altogether. The missions generally involve breaking into and out of heavily armed facilities, with every level providing a multitude of ways to approach it.

For example, to break into a manor, you might climb up onto the second floor and break a window to get in, sneak around to the back door and pick the lock, or knock out the man guarding the front door. Each option has different challenges and benefits. Breaking a window is noisy and sure to attract guards to investigate, but being on the second floor means you're much closer to wherever the lord is keeping his valuables.

As a character, Garrett is a typical anti-hero, something I found refreshingly more believable than the standard do-gooder hero of today's games. Garrett is self-centered, distrusting, arrogant, and apathetic. He just wants to steal enough valuables to retire in style. But he's also extremely clever, and as a character, he is manipulated into both putting the world in danger and saving it through his natural reactions. Garrett only wants to save the world because it implicitly saves his own skin, and he'd rather not put himself in that much danger. But he doesn't have a choice, and the puppeteers in the game know this and take advantage of him because of it. To me, that's a much more believable character than an obscenely powerful special ops soldier who fights evil terrorists Just Because He's That Nice Of A Guy! I won't spoil the story because even after 10 years, it's still excellent.

I found the gameplay intense because it was the first game where I felt genuine fear. I'm not talking about the kind of frightening situations they'd put in a game like Resident Evil, where a zombie pops up out of nowhere and charges at you. That contains all the subtlety of a carnival funhouse. There's a difference between frightening and feeling fear. True fear comes from an impending sense of dread and worry, and that's something a zombie surprise cannot deliver. In Thief, however, you spend the entire game as a weakling. You know that if you make a single mistake, a whole slew of guards can appear and bring a world of hurt. So the entire game is spent wondering if you are hiding in a dark enough shadow, or if you can find a nearby window to jump through if things go downhill.

I'm so impressed that a game could do such a great job of bringing out a variety of emotions. In the space of one minute, you can go from suspense to fear to terror, and then feel extremely pleased with yourself by barely escaping death and finding a well hidden piece of treasure. And never have I played a game where the main character was so despicable, and yet I found myself liking him anyway.

I know that games should either be a lot of fun for a little bit of time, or a little fun for a long time. But somehow Thief manages to be the best of both worlds. The gameplay is always intense, and the story is so good that it hasn't gotten old in the half dozen times I've replayed the game. And while Looking Glass Studios is no more, I hope that someday someone makes another game as good as Thief.

Monday, January 26, 2009

I Like This Duke

As I've mentioned before, the book Dune shaped much of my adult thinking. In Dune, Liet Kynes meets Duke Leto Atreides to give him a tour of Arrakis, the planet the emperor has given to Duke Leto. Kynes is a servant of the emperor as well as imperial planetologist-- think head of the EPA on a galactic level. Kynes is also a member of Arrakis' native population, and Duke Leto has come to the planet to mine it for spice, an obscenely valuable commodity. So Kynes has good reason to hate Duke Leto. Leto is consuming resources from Kynes' home planet, undermining Kynes' task of protecting and understanding the planet. Leto is an external force that could disrupt the planet's entire ecosystem and destroy Liet Kynes' people.

Yet the second time Kynes meets Duke Leto Atreides, Kynes changes his opinion of the duke. The two go out to survey a spice mining operation. Something goes wrong with the harvester, which has almost a full load of spice. The load was worth roughly one million times a worker's lifetime wage. Yet rather than worry about the spice, Leto goes to extraordinary lengths to save every last person working in the harvester. He shrugs off the value of the spice without a second thought, arguing that they can always get more later. In the words of Liet Kynes,

"This Duke is more concerned over his men than the spice! I must admit, against all better judgement, I like this Duke."

My interest in politics stems from the fascinating drama of the story. And politically, I'm very jaded. I believe that almost invariably, politicians act in their own best interests. They help the populace only out of self-motivation. For example, senators get additional funds for their state so that they will be re-elected, and they try to get elected so they can acquire "campaign contributions" (essentially bribes) in return for passing laws that favor wealthy organizations. Politicians do both good and bad things with their power, but nothing is motivated by selflessness. Even their good deeds have selfish motivations. So when politicians promise change and a renewed pledge to defeat corruption, I don't think anything of it.

I find myself in a similar position to Liet Kynes as I watch President Barack Obama. I'm not expecting a miracle from the man, and his promises of hope and change strike me as typical campaign messages from younger politicians. As a politician, Barack Obama is untrustworthy until proven otherwise, and even then he's still suspect. But against all my better judgment, I like this President. The feeling is extremely disturbing.

As I have very different expectations for President Barack Obama than other people have, I suspect I like him for very different reasons. I think a lot of people like Obama simply because he's not George Bush. Certainly many people like Obama because he's not white. Obama is empirical proof that minorities can earn just as much success as white people. He is also a symbol of a new generation, being the youngest president America has had in decades.

Symbols inspire, and living symbols carry high expectations. But I'm unmoved by Barack Obama, the symbol. Rather, I am inspired by Barack Obama, the man. In many of the decisions he's made so far, I feel like he is paying far more than lip service to his promises. For example, his secretary of energy, Steven Chu, is a professor with a Nobel Prize in Physics. Who was the last secretary of energy that even had a PhD? I know the standard decision is to choose a board member of an oil company like Exxon and have them recommend policies that involve deregulating pollution controls and not investing in alternative energy. But actually choosing someone who is respected by the entire science community as a top member of his field? That's totally unheard of, and well beyond my low expectations of who would be selected for a political position.

Similarly, Obama's adamant stance on closing the Guantanamo Bay detention center is beyond the standard political posturing statements. Normally politicians will decry how human rights are being violated, but they won't actually do anything about it, or they risk losing votes from people who think that they can secure safety for the nation by torturing potential enemies. Obama seems very intent on doing something about the situation even if people disagree with him.

I believe George Bush initiated the two wars with Afghanistan-based terrorists and the country of Iraq out of vengeance. As a man, Bush views loyalty as something to be rewarded and dissent as something to be punished, and this is the extent of his motivations. Even though Bush claims he always "does what he feels is right", his definition of whether someone is right or wrong seems eerily correlated with whether or not they agree with his opinions. It's a bit of circular logic which always concludes, "I'm going to do what I want no matter what." The true test of whether you do what is right is how often you do things you don't want to do.

In contrast, Obama is a very pragmatic man. He wants peace in Iraq because destabilizing the Middle East makes life worse for America, not better. He wants to continue the fight in Afghanistan because the terrorists intend to strike America again. And he wants the US government to stop torturing prisoners for two main reasons. First, it makes many other nations hate America. And second, torture simply does not work. Studies have shown time and again that information extracted through torture is highly unreliable. Torture has no use as a tool of interrogation. It is only a tool of revenge, which is why it was used in the Bush administration and why Obama wants nothing to do with it.

I still think Obama is a politician, and his objective is to get reelected. But his apparent plan for reelection is to do as much good for America as possible, picking the most practical and pragmatic solutions rather than the solutions with the best image. If he wants to make America a better place for selfish reasons, that's fine by me. I hate to say it, but I like this president.

Monday, November 24, 2008

Boot Camp

An old friend of mine is a former army drill sergeant. I was surprised to learn this, as he didn't fit the stereotypical personality at all. Thoughtful and friendly are not the first words that come to mind when you hear the phrase "Drill Instructor". This former job came up when someone mentioned they were leaving for basic training (a.k.a. "Boot Camp") in six weeks, and my friend immediately told the guy, "Start doing push-ups now." Apparently you do so many push-ups in boot camp that it's never too early to get in shape, and the better physical shape you are in, the easier it goes. Not that boot camp is ever easy.

Intrigued by learning that my friend had been a drill sergeant, I asked him about basic training from the sergeant's perspective. According to him, the most important thing you can do to survive boot camp is to be practically invisible. You need to blend into the crowd and never even be noticed by the drill instructors. The instructor's job is to spot recruits that don't fit the mold and make them fit in.

"The instructors don't hate you," he said. "It's just that nothing you've learned in civilian life is of any use in the army, and they need to beat that crap out of you."

The typical civilian doesn't take orders very well. Generally orders are considered "loose guidelines". Civilians are likely to ignore orders outright, only do the parts they want, or do things differently based on their preferences. That kind of attitude is acceptable (although not optimal) in real world business settings. In the army though, the lives of thousands if not millions of people are at stake, and defying orders can have far-reaching consequences that the enlisted units can't possibly be aware of.

If your army company controls a bridge, but your officer tells you to march one mile down a river and swim across at midnight, then that's exactly what you do. The option of taking the bridge across and walking down the other shore shouldn't even enter your mind, even though your officer never told you why he gave you your orders. Soldiers who disobey orders either get killed or get other people killed. That's why the number one objective of a drill instructor is to teach privates to follow orders.

There's a similar issue for AI in team play video games. If your team contains some bots or possibly "side kicks", the game designers generally give you some way to give them orders. No game designer considers making your minions sometimes ignore orders, of course. Rather, the problem is this: just as most people aren't used to following orders precisely, they also aren't used to giving them precisely. There's a whole set of problems in determining what the player even intends when they say something as simple as, "go back through the door". Which door? How far back? So much of language depends on context, and it's just hard to determine meaning from a small snippet of words.

The second issue is that no matter what order a player gives, they almost never want the bot to perform that task indefinitely and precisely. When you say "go guard the red armor", it's just until they spot an enemy. When the bot spots an enemy, you don't want it to let them get away if that enemy leaves the red armor spot. The bot should stop guarding and start attacking in that case. Similarly, if no one goes by the red armor for a while, the bot should stop guarding and find something useful to do. When the player says, "Guard the red armor", the bot needs to understand that as "Guard the red armor until you see an enemy or until you've been there for three minutes".

This is the basic strategy BrainWorks uses for its message processing, which is largely unchanged from the original Quake 3 message code. It does its best to determine what the player means and treats that instruction as a relatively important high level task. But it's still just a guideline. Activities like combat take precedence, and the bot will forget about the order after a few minutes. Ironically the bots need to function like civilians rather than soldiers, or the person playing the game wouldn't be happy with how the bots follow orders.
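The "order as guideline" idea can be sketched in a few lines of C. This is only an illustration with hypothetical names and values, not the actual BrainWorks code; the three-minute timeout and the combat-interrupt condition are assumptions drawn from the description above.

```c
/* Sketch: a standing order is honored only until something more
   important happens or the order grows stale. Hypothetical names. */

#define ORDER_TIMEOUT 180.0f   /* assume orders expire after ~3 minutes */

typedef enum { ORDER_NONE, ORDER_GUARD } order_type_t;

typedef struct {
    order_type_t type;
    float issue_time;   /* game time when the order was given */
} bot_order_t;

/* Decide whether the bot should still honor its current order.
   Spotting an enemy or exceeding the timeout overrides the order. */
int order_still_active(const bot_order_t *order, float now, int enemy_visible)
{
    if (order->type == ORDER_NONE)
        return 0;
    if (enemy_visible)
        return 0;   /* combat takes precedence over standing orders */
    if (now - order->issue_time > ORDER_TIMEOUT)
        return 0;   /* forget stale orders and find something useful to do */
    return 1;
}
```

In other words, "guard the red armor" is silently rewritten as "guard the red armor until you see an enemy or until three minutes pass", which matches what players actually expect.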

Monday, November 17, 2008

Ignorance Is Bliss

Having written twice about the dangers of believing everything you are told, I'd like to give some face time to the opposing argument:

Ignorance is bliss.

It's all well and good to say, "You should understand things, not just follow blindly." But to be pragmatic, there is only one time this is a serious improvement: when the blind belief is wrong. What about the times that blind belief is right? Humans survived for centuries without understanding how gravity truly worked, and that didn't stop them from creating some awesome things. They were even able to use the fact that "things fall towards the ground" to great effect, such as building water clocks, without ever learning Newton's laws of motion.

Even if someone did want to totally understand the world today, it wouldn't be possible. There is such a vast corpus of information that learning even 1% of it is literally impossible in the span of a human life. Mathematicians who are experts in their field rarely have the chance to keep up on other branches of mathematics, to say nothing of physics, chemistry, or medicine.

The fact of the matter is that most of the information we've been told is correct. If I get on a bus that says "Downtown" as its destination, it really is going there. The driver could take it anywhere, but I'm very certain that the destination is downtown. When I order a meal at a restaurant, I take it for granted that the cook can actually make the things on the menu and that I'll be served food that's reasonably close to the description provided. The waitress serves me food assuming that I will pay the price listed. I suspect that fewer than 1 in 10,000 people really understand what a computer does when you turn it on, but hundreds of millions of people use a computer every day. Understanding is a luxury, not a necessity.

Like it or not, our lives as humans are anchored in faith, not reason and understanding, and this is the cornerstone of the religion we all share: causality. If we do something ten times in a row and get the same result, we expect that the eleventh time will produce the same result. And it does, even though we rarely know why. Understanding everything is impossible, and the whole purpose of culture is to provide structure so that everyone can use the discoveries other people have made.

If that's the case, why bother thinking about anything at all? Why not let someone else do your thinking for you? The primary purpose of education isn't to give people information, but to teach them how to think. And most importantly, to teach them when to think. Thinking is really important in uncharted waters. In any situation that doesn't match your typical life experience, thinking will give you a much better idea of what to do than trying something at random.

Unsurprisingly, the same problem comes up in artificial intelligence. There's only so much processor time to go around. So if seven bots all need the same piece of information and they will all come to roughly the same conclusion, it's much faster to do a rough computation once and let them all use those results. This leads to an information dichotomy, where there's general "AI Information" and "Bot specific information". Each bot has its own specific information, but shares from the same pool of general information. In BrainWorks, all bots share things like physics predictions and the basic estimation of an item's value. If a player has jumped off a ledge, there's no reason for every bot to figure out when that player will hit the bottom. It's much faster to work it out once (the first time a bot needs to know) and if another bot needs to know, it can share the result.

These are the kinds of optimizations that can create massive speed improvements, making an otherwise difficult computation fast. If you think about it, it's not that different from one person researching the effects of some new medicine and then sharing the results with the entire world.

Monday, July 28, 2008

Mission Impossible

One of the basic realities of life is that if something is easy, it's already been done. In the grand scheme of things, all unsolved problems are hard problems. While that might not have been true 300 years ago, the advancements of widespread education and information technology have made it easy to find potentially talented individuals and make sure everyone knows about their work. The result is that for almost every technological problem you can think of, either someone has solved it already or it's very difficult.

So what do you do when you need to do something that hasn't been done before and it turns out the problem is very hard? How do you break an impossible problem into something solvable? While there's no right or wrong way to go about this, there are three basic approaches you can use. Assume that the objective of a perfect solution is either practically or literally impossible, and you just want the best solution that's actually feasible.

#1: Improve on an existing base

The concept here is to focus on the small, incremental progress you can achieve rather than the large, difficult end objective. This is the approach I took when solving the general high level goal selection in BrainWorks. Bots must constantly decide what they are most interested in doing. Should they track down a target or look for more players? In capture the flag, should they defend their flag or get the opponent's? I confess I've added nothing revolutionary to this logic, and the correct choice is something even experienced human players have trouble making. Instead I just added a few pieces of additional logic to help the bots decide. BrainWorks bots will consider tracking down a target they've recently heard but not seen, for example. It's not algorithmically amazing, but it's progress, and that's what matters.

#2: Break the problem into pieces and only do some of them

I used this approach when designing the item pickup code. Roughly stated, the implemented algorithm is to estimate how many points the bot would get if it were to pick up different items, and it selects the item that gives the most points. While the item selection is reasonably good, a lot of corners were cut for the purpose of reducing complexity. How valuable is a personal teleporter anyway? BrainWorks just values it at 0.6 points and calls it a day, but clearly such an item is far more useful in some situations than others. And the estimations of where enemies are likely to be are based on proximity to items, not on actual geographic locations (like being in a specific room or a hallway). The estimates aren't remotely perfect, but they are still good enough for the purpose they serve: a rough estimate of what item the bot should pick up next. Designing this code wouldn't even have been possible without ignoring all the truly hard parts of item pickup.
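The core of that "estimate points per item, take the maximum" selection can be sketched very simply. This is an illustrative reconstruction, not the real BrainWorks code; of the values shown, only the 0.6 for the personal teleporter comes from the description above, and the rest are made up.

```c
/* Sketch: pick up whichever item is estimated to yield the most points.
   The estimation itself is where all the cut corners live; the selection
   on top of it is just an argmax. */
typedef struct {
    const char *name;
    float       est_points;  /* estimated points gained by picking this up */
} item_score_t;

/* Return the index of the item with the highest estimated payoff. */
int best_item(const item_score_t *items, int count)
{
    int best = 0;
    for (int i = 1; i < count; i++) {
        if (items[i].est_points > items[best].est_points)
            best = i;
    }
    return best;
}
```

The design choice is that every hard question (how valuable is a teleporter? where will the enemies be?) is pushed into the single `est_points` number, which can be as rough as the designer can tolerate.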

#3: Accept defeat and deal with the consequences

This is essentially The Attitude of the Knife. Sometimes things just don't work out and can't be fixed, and all you can do is accept that reality and handle the consequences as best you can. This was the approach I took for the navigation code in BrainWorks. As I've mentioned before, the navigation tables have huge issues on anything other than a flat, two dimensional map. While I'd love to solve this problem and I'm certain I could do it, I just didn't have enough time to tackle it. There were a few clearly bad navigation decisions (such as jumping between ledges) that I wrote code to manually fix, but those were essentially stopgap measures.

While it's not ideal, there's nothing wrong with accepting defeat against hard problems. After all, you aren't the only person in the world who couldn't solve the problem.

Monday, May 26, 2008

Little Lies

I read an exceptionally good essay by Paul Graham entitled, "Lies We Tell Kids". It talks about white lies, the kind you tell to protect people rather than manipulate them. It's worth reading in full, but let me summarize what I took out of it:

People usually lie because the listener cannot emotionally or mentally handle the truth. While this is often the best thing for the listener at present, it has a cost: They do not understand reality as it truly is. They are unprepared to handle that truth when they encounter it in another fashion.

As I wrote earlier, I see strong parallels between designing AI and being a parent. There's a lot to be said for the "lies" that we as AI designers tell our AI: "No really, you can have perfect information about the entire world. And if you have to spend a long time thinking, that's okay. The entire world will stop and wait for you to decide." Whenever we let our AI cheat at the game, look up information it shouldn't know, or otherwise do things a player can't do, we are in some sense lying to our AI about what reality is like. And the price we pay is this:

We live in fear that our AI will encounter a game situation where the player will know the AI has cheated.

I spend a lot of time talking about good AI and bad AI, and why it's often worth the time to write good AI. But I'm not going to say we shouldn't lie to our bots! I recognize that "bad AI" has a place in AI design. Sometimes a corner must be cut because a problem is simply too difficult. And often these are not isolated exceptions. It's possible to write enjoyable AI that cuts an awful lot of corners as long as you know the right corners to cut.

For example, a bot can get away with almost anything if the player can't see the bot. The amount of cheating a bot can get away with dramatically decreases once the player actually watches the bot, in the same way that a magician's tricks only work when you watch the hand he wants you to watch.

The real key to writing good AI is learning which corners to cut and which problems must be explicitly solved in a human-like manner. In other words, it's about determining when to write bad AI and when to write good AI. In BrainWorks, I certainly erred on the side of too much good AI, but I think this is still a long term benefit. Once someone has solved the problem (and released the source code), everyone can reap the benefits and we can all move onto harder problems. But actually writing artificial intelligence that doesn't cheat about anything is an excruciatingly difficult problem. An AI that does everything like a human does would appear to observers to be a human. It could pass a Turing Test, the holy grail of Artificial Intelligence research.

When I say "Holy Grail", I mean it in more ways than one. Even the most optimistic estimates on when we'll be able to create a truly artificial mind say it won't be done for decades. My personal estimates are that it will be done in 300 to 1000 years. And some people claim that it is literally impossible to create an artificial mind, that it so complex that it can only be evolved or created by God (depending on your philosophical preference).

If you want an example of what roughly 700 years of research will give you, the atomic bomb was created only 500 years after the birth of Leonardo da Vinci. Think about how much science advanced in those 500 years. While that time was centuries long, it was punctuated by thousands of little scientific discoveries about how the universe worked. Einstein couldn't have had his flash of insight about how Newton was slightly wrong if Newton hadn't done work to come up with equations that were 99.9999% right.

I believe the next 700 years of AI research (give or take a little) will be similar. Humanity's journey to the creation of an artificial mind will be punctuated by little advancements. No one person could do all the work necessary to design everything needed for a true artificial mind, but every time an AI designer chooses to do a bit of "good AI" rather than cutting corners with "bad AI", we get one step closer to the final goal, even though the destination is light years away. I will not fault anyone for writing AI that cheats; I've done the same. But writing AI that cheats even a little less than usual is worthy of high praise.

Monday, May 5, 2008

The Man Behind the Curtain

Last week I wrote about the Christian philosophy I had. The basic conclusion was that if someone has sufficiently strong reason to believe in a religion, nothing external from that religion can ever make them lose faith. Rather you will see them modify the fringes of their beliefs to fit whatever counter-evidence they encounter, without changing the core beliefs.

For example, a Christian may start out claiming that God created the world in seven days. When presented with scientific data demonstrating the world is older than 6,000 years, they might claim Satan just planted the dinosaur bones to deceive people into believing God doesn't exist. (Yes, I've heard someone present this argument in all seriousness.) The more rational believers can apply Occam's Razor to this situation and conclude that maybe "day" isn't the right translation, and perhaps end at the conclusion that God used evolution to create humans. But no amount of external (scientific) evidence will convince a believer that their religion is wrong. I was familiar with all of those arguments, and they didn't make me lose my faith in God. Only an internal inconsistency could fracture my faith.

I moved to California in the middle of 2000, a year and a half before I started work on BrainWorks. I started the project by programming on evenings and weekends, but after almost a year I realized that writing truly good AI would take more time than that. My wife and I talked about the options, and after a lot of time spent in prayer, we came to a rather dangerous conclusion. I decided to quit my day job and work on BrainWorks full time as a way of building my portfolio, live off savings, and then look for a job in the game industry. Even though this was a crazy idea, we felt assured by God in our prayers of four things:
  • God would provide for us
  • God would vindicate this audacious decision to others upon completion
  • I would truly meet God through the work I did
  • I would finish the project before we left the state of California
In the years following, we would return in prayer during difficult times, and also ask for the prayers of others, and we were continually encouraged when we heard these promises repeated, both in our minds and from the words of others.

Concurrently, there were other issues we had lifted up in prayer. Most notably, I had back problems for over a decade. Having tried multiple chiropractors, doctors, physical therapists, and exercises, and having received countless prayers on the issue, I was at a loss. Nothing seemed to work. But I believed in a God that could do real miracles, and healing my back was certainly not the hardest of them. After years of prayer, I kept hearing that God really was going to heal my back, at the proper time, and that receiving healing was about more than just freedom from physical pain. It was about encountering God in a deeper way, which is why the timing mattered and why God didn't just heal me immediately. We felt like God spoke to us that he would give me full healing for my back before we left California. In some sense, our sojourn in California was an opportunity to witness the miraculous works of God, in my work and in my life, and to encounter God.

So what happened? In case there is any doubt about the story's outcome, let me set the record straight. While we did have enough to live while I was unemployed, I did not "encounter God" through the work. I received a job offer in Boston in January of 2007, which my wife and I prayed about and decided I should take. I did not complete BrainWorks before I left California, nor were my back problems healed. In our cross country drive from Los Angeles to Boston, the two of us were faced with a rather uncomfortable situation. Several things we thought we heard from God, including the prayers of others, all agreed with each other, but they were all wrong. That's really not supposed to happen. At the end of that drive, we came to one undeniable conclusion:

We are not very good at hearing from God.

Maybe the reason is that there is no God, or maybe just no Christian God. Or maybe the Christian God exists, and the problem is with us. When we arrived in Boston, we set aside one month for God, each day praying, "God, if you're real, undeniably reveal yourself to us." Since we are very bad at hearing from God, if we are to have a relationship with God, the effort must be his. After one last month, we still experienced nothing. And at that point I came across one truth, written in the Bible, that forever shattered my faith.

If you're not familiar with the Old Testament, one of the most common themes is the warning against idolatry, the worship of false gods through graven images rather than the one, true God, the "I am". The Hebrew scriptures are littered with hundreds of these warnings, but the best story is from 1 Kings 18, where Elijah confronts the prophets of Baal. Here is an excerpt:
Elijah went before the people and said, "How long will you waver between two opinions? If the Lord is God, follow him; but if Baal is God, follow him." But the people said nothing. Then Elijah said to them, "I am the only one of the Lord's prophets left, but Baal has four hundred and fifty prophets. Get two bulls for us. Let them choose one for themselves, and let them cut it into pieces and put it on the wood but not set fire to it. I will prepare the other bull and put it on the wood but not set fire to it. Then you call on the name of your god, and I will call on the name of the Lord. The god who answers by fire—he is God."
Naturally the prophets of the false god Baal have no success, but Elijah has success:
At the time of sacrifice, the prophet Elijah stepped forward and prayed: "O Lord, God of Abraham, Isaac and Israel, let it be known today that you are God in Israel and that I am your servant and have done all these things at your command. Answer me, O Lord, answer me, so these people will know that you, O Lord, are God, and that you are turning their hearts back again." Then the fire of the Lord fell and burned up the sacrifice, the wood, the stones and the soil, and also licked up the water in the trench.
The basic argument is always the same. What separates idols from God is the success rate. You should worship God because God is real and will provide for you. A true God produces results and a false God does not. After all, what purpose does God have if he can't make an impact on you? When I remembered this, I could not escape its conclusion in my life:

God is my idol.

I could not deny the promises that hadn't come to pass, and those promises that had were easily explained by things other than God. At the very least, the Bible I believed in condemned my God as an idol, since it also did not produce results. I had peeked behind the curtain, expecting to see the magnificence of God, and instead encountered a mirror. All this time, I was the man behind the curtain. The things I thought I heard from God in prayers were just my own desires and fears. I am not good at being God.

To continue worshiping this God would be worshiping an idol, something the religion condemns as an obviously pointless and worthless activity, but it was the only God I had ever known. From that point on, I knew I could no longer be a Christian. I realized Christianity did not have practical applications in my life. But that's different from saying Christianity is ideologically false. Next week I'll explain how I concluded the core ideology of Christianity was flawed, and how I went from an outcast of Christianity to living in the freedom of heathenism.

Monday, April 28, 2008

In the Cleft of the Rock

As I mentioned earlier, I was a devout Christian when I started programming BrainWorks, and now that I've finished, I am an agnostic with no particular religious leaning. My work in artificial intelligence was not the only reason I gave up my religious faith, but it is part of the reason. Moreover, it is a story worth telling, and I hope it will help both Christians and non-Christians gain a greater understanding of each other, something that is sorely lacking in this world where rational justification often takes a back seat to dogmatic conclusions.

To understand the process of giving up my faith, however, you must understand the faith I had. Many people who call themselves Christians, perhaps most, don't have much to do with the actual tenets of the religion. To the average Christian, being a Christian means that God and Jesus love you, and Jesus died for your sins so that when you die, you go to Heaven instead of Hell. In other words, Christianity is a nice feeling in your heart and your insurance policy for when you die. It has little impact on your actual life, except for a general imperative to "do good things", which many Christians ignore, or act as if it only applies to other Christians.

I was never this kind of Christian.

My core belief has always been in causality, and by extension rationality. I have always believed that if something is true, you can trust in its consequences to be true as well. So I believed in the core tenet of Christianity, which is:
Jesus died to pay for my sins and rose from the dead so that I could receive God's spirit and thereby have a loving, personal relationship with the God in this life, and be with God in heaven after I die.
And I believed everything that logically follows if this is true. So yes, I believed that God can and does do miracles, even in this day and age. Real miracles too, not "I saw the face of Jesus in my pop tart". Stuff like "I was born blind and now I can see". I believed that people can and did hear from God, and that if you pray to God, he hears you as well.

Note that this is fundamentally different from saying that the Bible is the 100% true, infallible word of God. There are some clear logical inconsistencies, and I just wrote that off as "I guess they didn't hear correctly from God about that part", or "someone corrupted this text for their own purposes". The most glaring example is the book of Daniel, which purports to be written during the Babylonian captivity. However, the book contains words whose linguistic roots trace back to the Persian Empire, a culture the Israelites didn't encounter until well after Daniel died.

There were also some religious statements that don't logically make sense. The traditional explanation for these things, such as "don't eat pork", is that they apply to older cultures for some reason but don't apply to the culture of today. However, I took the stance that maybe people just didn't hear correctly from God and the Bible's authors recorded the wrong thing.

For example, even as a Christian I did not think homosexuality was immoral. There is no logical reason that consenting sex between people of the same gender would be wrong but between opposite genders would be okay. "God hates it for some reason" isn't a good argument. According to the book of Jeremiah (Jeremiah 7:19), things are wrong because they are detrimental to humans. There is certainly nothing humans can do to injure God! Our actions can only harm humans, so if an action does not hurt anyone, it cannot be immoral. Consensual sex falls into this category regardless of gender.

So as a Christian I had no problem admitting that the Bible was flawed and even wrong about some things. I disagreed with some very popular Christian opinions, and not just regarding homosexuality. But I still considered myself a Christian, because you can still believe the Bible is wrong about some things (bacon, homosexuality) and right about other things (Jesus died so God can have a relationship with me). I still believed that I could hear from God, talk to God, and witness the supernatural miracles of God.

But around one year ago everything changed, for two very different reasons. One reason convinced the pragmatist in me and the other convinced the idealist, and the two arguments together made me undeniably admit that I had been very, very wrong for the past 30 years of my life.

Please believe me when I say that I was not looking for a reason to leave my religious faith. As a Christian, I felt totally in love with God. To this day, I have never felt more joy in my life than during a Sunday worship service, singing hymns and praises. Being confronted with the irrefutable reality that I was wrong about God was agonizing and heart wrenching, and met with many tears. For months I felt devastated, and I still wonder if I will ever find something that brings me as much joy. But I had no other choice. I do not worship God; I worship truth, and by extension reality. If the truth is that there is no God, or at least no Christian God, then my only option is to act on that truth.

I am confident that most atheists and agnostics do not have the faintest idea how much safety and security a devout Christian gets from their religion. From the agnostic point of view, it's easy to think of Christians as weak minded for taking so much on faith and so little on reason. If a Christian doesn't change their mind when presented with solid reason, that conclusion seems natural. But it's difficult to comprehend the amount of mental and emotional security that a religion brings, even a false one. The sheer fear of being wrong is enough to make most religious people flat out ignore solid arguments to the contrary.

I'm glad I took the red pill, but the blue pill would undoubtedly have been less painful. It is my hope that godless people would have compassion on those who still have religion, and understand that even though many may fear they are wrong, they lack the emotional strength to take the large steps that seem easy for us.

Monday, April 14, 2008

BrainWorks 1.0.1 Released

I've just released the latest update to BrainWorks implementing the fixes I talked about a few weeks ago. Basically there's a much more sophisticated algorithm for tracking and estimating how likely a bot is to shoot a weapon in a given situation. You can download it using the links on the right if you're interested in trying it out. Let me know what you think!

Related to this, I've been thinking about the question of when software is done. In some sense I still stand by my answer in The Attitude of the Knife, which is "when you say it is". The flip side is that as long as you still have ideas and commitment, there's always room for continual improvement. For a simple program meant to meet a specific purpose, such as reading your email, there comes a point where there isn't much more room for feature development. Artificial intelligence isn't like that though. To this day, researchers are still trying to figure out how different areas of the human brain work. And contrary to popular opinion, humans are not the end of evolution. The human brain itself continues to refine and advance through generations.

I believe the line dividing things that can be finished and things that cannot is the line of self reference. When the problem is best solved by something that can analyze and correct its own mistakes, a whole new field of issues applies. For an in depth explanation of why this is, I highly recommend the second of three books that influenced my mental framework for understanding the world. That book is Gödel, Escher, Bach: An Eternal Golden Braid, and it is about the nature of intelligence. Very roughly paraphrased, the book talks about the mathematical theorem known as Gödel's Incompleteness Theorem, which says that any system capable of describing itself can describe statements that are true but unprovable. Originally the theorem was discovered in attempts to work out some issues in Principia Mathematica, an attempt to derive all mathematical truths from first principles. However, the incompleteness theorem sheds unanticipated light on philosophy and, by extension, on the nature of thought and intelligence.

Viewed in the context of intelligence, you could conclude that there are things which an intelligent person would do, but there is no describable algorithm that could conclude what those things are. Perhaps this represents acts of creativity, intuition and insight. Or perhaps those things are describable, but other things are not.

Applied to Artificial Intelligence, it means that there are some aspects of AI that we cannot solve; we can only approximate. And there's always room for better approximations. This is the real reason that AI development can last forever. You're never really done. At least not until you say you are.

Monday, March 31, 2008

Parenting

Writing artificial intelligence is a lot like being a parent. It requires an unbelievable amount of work. There are utterly frustrating times where your children (or bots) do completely stupid things and you just can't figure out what they were thinking. And there are other times they act brilliantly, and all the effort feels satisfying and well spent.

Fascinatingly enough, people rarely ask the question of whether being a parent is worth the effort. There's an implicit belief that once you grow up and get married, you should have children. It's like the option of not having children doesn't exist. And of course once you do have children, you must do everything in your power to raise them as well as you can. Questions of whether becoming a parent is a good idea, or how much time should be invested in children rather than your spouse, are discouraged if they are even asked.

Now I'm not suggesting that parents shouldn't spend a lot of time parenting. Once you've committed the next 18 years of your life by having a child, it seems like you should live up to the responsibility you created for yourself. I'm just wary of people claiming there's only one answer when they aren't even asking the question. Doing the right thing for the wrong reason might cause you to do the wrong thing later, as situations change. A good example from parenting would be an excessive amount of hand-holding once your child goes off to college. If you always believe you should give 100% to help your kids no matter what, you might end up doing their laundry and cleaning their room when they turn 25. Certainly the purpose of parenting is to turn children into adults, and at some point good parenting involves letting your child be an adult. So the core question of, "Is it worth the trouble to have children?" is a real question, and the answer isn't always yes, although many people assume it is.

What seems strange to me is how people answer the related question, "Is it worth the trouble to program good AI?" And they almost universally say, "No". There are countless games where the AI overtly cheats to win-- pretty much every real time strategy game. Almost every first person shooter to date has had massive problems navigating around a level, Quake 3 included. Most squad-based AI is exploitable by any player who has encountered it a reasonable number of times. And the solution of heavier scripting might make AI seem more realistic when first encountered, but it's far less realistic every other time. Companies resoundingly see little financial benefit in creating genuinely good AI.

I find these answers very much at odds with each other. Writing good AI requires more intelligence than raising a child well, but certainly less time. Moreover, good scientific research results are never lost. They can be built upon for centuries. They still teach Newtonian mechanics in colleges, even though we know Newton was technically incorrect (but close enough for most practical purposes). There is a lot of potential long term value in designing good AI, even if the short term profits aren't there. Similarly, the cost of raising a child these days is easily over $100,000. That's a huge short term investment that, quite frankly, never pays off for a lot of children in long term societal benefit.

The real conclusion is that economic and scientific valuation have almost nothing to do with the actual choices people make. When someone says, "Is it worth it to raise children?" or a company asks "Is it worth it to make good AI?" they are using different definitions of worth. The company wants to make money but the potential parent wants to enjoy life.

People don't do what makes them the most money. People do what they love!

Why are there over six billion people in this world? Because people are genetically programmed to love having children. The existence of hormones that encode these feelings of love doesn't make the love any less genuine. Children require effort, but the love for children overrides the dislike of effort (for most people, at least). Without the love of children coded into the DNA of humanity, good parents might become as rare as AI programmers. There's not a lot of people who love designing AI.

Of all the things I have learned designing AI, the most important is this:

Find what you love doing, then do it.

That said, there are surely people who would love life more if they didn't have children. And there are companies that make money by writing good AI. I would love to see more people enjoying life, even if that means going against societal trends and not having children. I would also love to see more companies making money by writing good AI. Not every company can or should do that, but I believe there's room for advancement in both areas.

Monday, March 3, 2008

Good Enough For Government Work

This might be a surprise if you've read my post on pragmatism, The Attitude of the Knife, but I'm actually an idealist. I don't want things to be good enough; I want them to be perfect, or at least as perfect as pragmatically possible. I use the attitude of the knife to temper my idealism.

Of course, there is tension between idealism and pragmatism when encountering very difficult problems. I don't just mean, "you'll have to be really smart to solve this problem." I mean, "it's mathematically impossible to solve this problem using the computer you have access to." The idealist wants 100% and the pragmatist wants 99% or even just 90%.

The classic example of a mathematically difficult problem is the Traveling Salesman Problem. If you're not familiar with the problem, it works like this: Suppose you have a salesman who wants to visit every city in a certain area exactly once, and he knows the distance between each pair of cities. What's the minimum distance he'll have to travel to visit all the cities and how do you determine it?

Mathematicians have been studying this problem for almost 200 years now. It turns out that determining the perfect solution is really, really hard, but it's possible. Worst case, you can just check every possible ordering of cities. That worst case is unbelievably bad, however. If the salesman had 30 cities to visit and a computer could test one trillion options per second (currently no single computer is that fast), testing all these options would take much longer than the current age of the universe. Most of the research mathematicians have done on this problem revolves around ways to get a "pretty good" answer without spending billions of years to do so.
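The arithmetic behind that claim is easy to check. Here's a quick back-of-the-envelope sketch in Python (the trillion-tests-per-second machine is the hypothetical one from the text, not real hardware):

```python
import math

# Brute force must test every ordering of the 30 cities.
# (Fixing the start city gives 29! instead; the conclusion is the same.)
routes = math.factorial(30)

checks_per_second = 1e12          # the hypothetical machine from the text
seconds = routes / checks_per_second
years = seconds / (60 * 60 * 24 * 365)

age_of_universe_years = 13.8e9    # roughly 13.8 billion years

print(f"{routes:.2e} routes to test")       # ~2.65e32
print(f"{years:.2e} years to test them")    # ~8.4e12
print(f"{years / age_of_universe_years:.0f}x the age of the universe")
```

So even at a trillion checks per second, the exhaustive search takes hundreds of times the age of the universe, which is why the interesting research is all about approximation.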

I say I'm an idealist, which means I want things to be perfect. But I'm also ruthlessly pragmatic. These contrasting ideals intersect on the definition of perfect. If it takes 1 trillion years to find the traveling salesman route that takes 38 days but 1 hour to find the route that takes 42 days, then the 38 day solution isn't perfect. It's really 1 trillion years plus 38 days, which is definitively worse than 42 days, 1 hour. A "perfect" solution must take into account the constraints surrounding the problem and not just the problem itself. The mathematically optimal solution is not the perfect solution expressly because it's not practical enough to use.

What does this have to do with Artificial Intelligence, and specifically item pickup? Deciding which items to pick up is very similar to solving the traveling salesman problem, and that's bad news for me as the AI designer! If there are 20 items on a level, a bot wants to pick up between 0 and 20 different items on the way to its final destination. That's similar to a traveling salesman wanting to visit exactly 20 different cities. And since BrainWorks bots can't spend several trillion years to decide what items they're going to pick up, their item pickup code cuts a lot of corners to get a pretty good solution in a reasonable amount of time. This isn't the mathematically optimal solution, but it is a better choice overall.

Here are some of the tricks BrainWorks uses to reduce the computation time:
  • Nearby items are grouped into a single cluster. Bots consider picking up the entire cluster at once. This reduces the effective number of things to consider.
  • Bots do not consider picking up more than four items at once before going to their final goal.
  • Bots only consider picking up the dozen or so items that are relatively near their current position, or not far off the path to their destination.
  • When the bot is very near an item (less than a second away), it automatically picks it up rather than doing a full computation.
These changes have a profound effect on the size of the search. Suppose a level with 40 items gets grouped into 25 clusters and the bots never consider more than 4 total pickups from the 12 nearest items. Each item pickup decision requires testing at most 800 options, rather than more possibilities than there are people on planet Earth.
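Those counts are easy to sanity check in Python. This sketch uses the hypothetical numbers from the text (12 nearest items, up to 4 pickups per decision, 25 clusters total) and assumes the pruned search treats each set of pickups as unordered, which is what makes the "at most 800" figure work out:

```python
from math import comb, perm

# Pruned search: unordered sets of 1 to 4 pickups chosen from
# the 12 nearest item clusters.
pruned = sum(comb(12, k) for k in range(1, 5))
print(pruned)  # 793 -- the "at most 800 options" in the text

# Unpruned search: every ordering of every nonempty subset
# of all 25 clusters.
unpruned = sum(perm(25, k) for k in range(1, 26))
print(f"{unpruned:.1e}")  # ~4.2e25 -- vastly more than Earth's population
```

The pruning turns a search with more branches than atoms in a human body into a few hundred candidate plans, cheap enough to rerun every time the situation changes.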

It's rare that thinking 10 item pickups ahead would really help the bot more than the first two or three choices. And usually the bot doesn't want to travel halfway across the map to pick up some random item. Only the nearby items are worth considering. For the one or two items that really might be that good, the bot specifically notes these items and includes them in the list of possible pickups, no matter how far away they might be. And as it turns out, these results are still "very good". Often the theoretically best solution isn't the best for the problem at hand. If you idealize the results and not the method, being idealistic involves selecting the most pragmatic solution.