Showing posts with label idealism. Show all posts

Monday, November 2, 2009

Badass

I love the internet! It's such an amazing storehouse of interesting information, much of which is even true. A while ago I stumbled on the website www.badassoftheweek.com. Every week the site features the linguistically embellished biography of some person who lived a legendary life. By and large they are factually correct (in the case of real badasses) or canonically correct (in the case of fictitious ones).

For some reason this site really tickles my fancy. It's the intersection of two things I both love and appreciate: learning and kicking ass. Take this excerpt from the article on Leonardo da Vinci:
I can't overemphasize how goddamned ridiculous it is that Da Vinci conceptualized the freaking helicopter at a time when most people were riding around on donkeys and using a sundial to approximate the time of day. Seriously, the freaking printing press was considered cutting-edge technology in these days, and Da Vinci was one step away from dusting Versailles in a goddamned Apache Gunship.
I suppose I could make some point about how "even you can be a badass," but let's face it: that's probably not true. The whole reason stories about these people are worth telling is that the average person simply cannot do what they did. That's what makes the story legendary, after all. You are not Johann Sebastian Bach. You aren't Prince. You aren't even Hannah Montana, and you never will be. Neither will I.

But that doesn't bother me. Humanity has done some truly impressive things, including walking on the surface of the moon and returning to talk about it. It's worth taking the time to be legitimately impressed, to really understand what makes these feats of strength so difficult and how certain humans triumphed over them anyway.

Monday, October 19, 2009

We Can Worry About That Later

As I've been in the middle of purchasing a house, my life has been extremely hectic. The past several weeks have been filled with home inspections, mortgage applications, and reviewing and signing legal documents. And naturally every detail has someone who wants to renegotiate it. After closing, we'll still have painting, moving, decorating, and furniture purchases to handle. My life is really stressful.

A few days ago my wife and I were reviewing the upcoming task list and for one item I mentioned, "we can worry about that later". I then realized I use that phrase an awful lot, and to me it means "we can handle that later". In other words, to me worrying is synonymous with work. Put negatively, I don't stop worrying until the work has been completed.

It was an excellent moment of self-reflection. I simultaneously realized why I'm so driven, intense, productive, and stressed. This life attitude has its benefits, but it's certainly not healthy in the long term. A few weeks ago I wrote about how the secret of a successful marriage is reducing stress. Perhaps it's better to say:

The secret of happiness is reducing stress.

After all, what else is stress but discontent about the possible future? It's a tricky balance though. If you live life in the future, you'll solve all these potential problems but always be tense as a result, never enjoying the present. If you live in the present you'll enjoy it only until something happens that you should have dealt with. How do you focus on the present while building your future? Personally, I feel like I don't balance these constraints very well.

So I've been thinking about different ways to manage stress. Some common things people try include:
  • Eat and/or drink
  • Have Sex
  • Exercise or spend time outside
  • Sleep or practice deep breathing
  • Read a book, watch a movie, or play a game
  • Daydream or imagine good things
  • Procrastinate by doing less important work
  • Remove the source of stress
Personally I spend a bit too much time on the last two items. Classifying these options, they seem to fall into one of three categories:
  • Solve the issue
  • Ignore the issue
  • Accept the issue
Unfortunately, if the only mechanism for relieving stress is to solve the problem, you're in for a rough life-- there's always something else you can worry about, and many things you can't fix. Ignoring issues seems fine for small problems. And acceptance is the only option available for problems too large to be solved or ignored.

I believe the real secret to happiness is properly identifying which problems should be accepted and which should be solved. And then realizing that most problems are of the former type. It's easy to get caught up in trying to fix everything, especially as a perfectionist. But the more you genuinely accept misfortunes as Not A Big Deal, the more you can enjoy the truly good things in your life.

If that's true, then the real secret to happiness is forgiveness.

Monday, October 5, 2009

Squeezing The Margins

My father has been a business consultant for decades, my mother is a certified financial planner, and my brother has run several small companies. So as you'd expect, my family talks a lot about money-- both now and when I was growing up. Much of the discussion is about how to make the most money.

For example, if you have a savings account that earns 3% a year and have a credit card balance that costs you 10%, you have no reason to keep anything in savings until that credit card is paid off. Not paying off the credit card costs you 10%, bringing the net return on savings to 3% - 10% = -7%, or a 7% loss. The 3% return isn't the full story on investment. You also lose 10% in "opportunity cost", since saving the money means you forgo the opportunity of paying down debt.

On the other hand, if you have some subsidized government school loan with a 4% interest rate and you are earning 5% on average from savings, you will make more money by paying the loan off as slowly as possible, putting all additional money into savings. When you put $100 into savings rather than paying down this loan, you owe 4% more in loan interest ($4) but earn 5% more in savings interest ($5). This is a net gain of 1% ($1).
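The arithmetic in both examples boils down to comparing two rates on the same dollars. Here's a minimal sketch of that comparison (the function name and figures are just illustrations, not from any real financial tool):

```python
def net_annual_gain(amount, savings_rate, debt_rate):
    """Yearly gain (or loss) from keeping `amount` in savings
    instead of using it to pay down debt charging `debt_rate`."""
    return amount * (savings_rate - debt_rate)

# Savings at 5%, subsidized loan at 4%: keeping $100 invested nets about $1/year.
print(round(net_annual_gain(100, 0.05, 0.04), 2))  # 1.0
# Savings at 3%, credit card at 10%: keeping $100 in savings loses about $7/year.
print(round(net_annual_gain(100, 0.03, 0.10), 2))  # -7.0
```

Whenever the result is negative, the "return" on savings is really a loss once the opportunity cost of the debt is counted.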

I've heard all kinds of other, more complicated ideas to leverage money for potentially greater returns. All the ideas come down to this basic concept though:

Maximize the spread between your expected investment interest rate and your debt interest rate.

If you could get a loan with a 7% rate that funds a business with an expected return of 8%, then that's 1% better than not doing anything at all. In theory.

The problem is that more complicated investment strategies have a greater chance of going wrong. There are simply more potential sources of failure. In the real world there is no such thing as a guaranteed 8% return on investment. More typically, there's a pretty high chance the return will be between, say, 6% and 10%, and a very low chance you'll get a 0% return or even less. So maybe you'll get lucky and get a free 3%. But another possibility is that the investment returns 6%. The business would consider this a success, but it cost you 7% to make this investment, so you're down 1% for your effort. And there's always the chance of a catastrophic failure, meaning you lose most or all of your investment.

Lately I've been rethinking this strategy of squeezing the margins. The increased risk is certainly part of it. Mathematically you will maximize your money if you go from a guaranteed $1,000 to anywhere between $900 and $1,400 most of the time and occasionally $0. But maybe maximizing your money isn't the end goal of life. The potential life impact of losing between $100 and $1000 is probably a whole lot worse than the best case scenario of gaining $400.
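That tradeoff can be made concrete with a small expected-value calculation. The outcome distribution below is entirely made up to match the ranges in this post, not real market data:

```python
# Hypothetical outcomes for risking a guaranteed $1,000 on a leveraged investment.
outcomes = [
    (0.60, 1150),  # 60% chance: modest success
    (0.25, 1400),  # 25% chance: best case
    (0.10,  900),  # 10% chance: underperforms
    (0.05,    0),  #  5% chance: catastrophic loss
]

expected_value = sum(p * v for p, v in outcomes)
chance_of_loss = sum(p for p, v in outcomes if v < 1000)

print(round(expected_value, 2))  # 1130.0 -- beats the guaranteed $1,000 on average
print(round(chance_of_loss, 2))  # 0.15 -- but 15% of the time you come out behind
```

The expected value favors the risky option, yet the calculation says nothing about whether you can absorb the bad outcomes-- which is exactly why maximizing expected money isn't the same as maximizing your life.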

Additionally though, more complicated investments just take more time to manage and stay on top of. The purpose of money, at least in my life, is to give me more time and reduce my stress. Working hard for an extra 1% isn't worth it if I can't turn that money back into the time I spent to get it. Squeezing the margins isn't the key to financial security, though it does maximize your odds of getting lucky and striking it rich. Here's the strategy I use:

Reduce, Reuse, Recycle

This is the recycling motto. Most people don't know this, but these steps are ordered by importance. In other words, the most important thing you can do is reduce the amount of things you obtain that need to be recycled. The second most important thing is to reuse the things you have. And if you can neither do without them nor reuse them, then as a last option you should recycle them. Sadly most people these days focus on "recycle" and ignore the other two.

This mindset applies just as well to finances though, and with the same priority scheme:
  • Reduce: Don't buy things you don't need
  • Reuse: Don't have money in so many places you can't keep track of all of it
  • Recycle: Put your money in the place it gives the highest return
That third rule is the same one as "squeezing the margins". Squeezing gets you into trouble when it violates the second rule-- once it takes a lot of effort to manage your investments, you are setting yourself up for a fall. But most importantly, the secret of wealth is not buying things you don't need.

Monday, August 10, 2009

Follow Your Dream

I've been watching some presentation videos on TED.com from their yearly conferences, and as a scientist, I've found them very inspiring. There was a demonstration of a cheap water filter that could be deployed across the entire globe and seriously reduce the infectious diseases that currently endanger billions of humans. Deaf percussionist Evelyn Glennie talks about music and listening. I watched a video of Elaine Morgan talking about the major differences between humans and other great apes. She makes a compelling argument for humans evolving from a species of apes that lived in water.

But most of all I appreciated this talk about living a passionate life. The speaker tells a story of a Yugoslavian Jew who escaped Nazi Germany and at every stage in his life found the resources to make a profound impact on the region he was living in. After watching these videos, I found myself very motivated to make an impact on the world at large. I believe that in several decades, I too could present something of value, and if I don't do that, then perhaps I'm not living up to my potential. But then I wondered if my motivation was misplaced. Michael Pritchard worked on a water filter because he was dismayed by the aftermath of Hurricane Katrina, not because he wanted to "make an impact on the world".

I rethought all the presentations, trying to find the common thread. What did these people have in common that helped them accomplish their dreams? They all have some raw talent, but primarily I got the sense they were strongly motivated, and their motivation was directly for the task they wanted to accomplish. Evelyn Glennie wanted to be a musician even though she is deaf. Through this she has become inspiring, but her motivation wasn't to be an inspiration.

Rather, I see these characteristics making up their stories:
  • Be motivated by the dream itself, not the result of the dream
  • Make plans on the scale of decades but don't follow them exactly
  • Don't ever give up when facing opposition
All of these people are somewhat talented and have a clear wealth of motivation. There weren't any presenters who were totally brilliant but only somewhat motivated. Success is about persistence. Each person had enough motivation to focus on something that would take years or decades to accomplish. They all got discouraged and did not quit. When something stood in their way, they changed their plans to get around that obstacle. This leaves us with two questions to answer:

What things do I care about?

How can I increase my motivation?

No two people have the same answers for these questions, and I believe you can only find these answers through introspection. In general though, people are motivated by positive feedback, concrete objectives, and small, discrete tasks. Things get hard when there aren't any positive results, the objective isn't clear, and the tasks can't be easily divided into portions.

This suggests that successful people are intrinsically optimists with a healthy dose of realism. They need to believe things will work out even when there's no feedback suggesting so, while remaining grounded enough to change plans to get around adversity. Too much optimism and you'll stubbornly try the same thing and fail. Too little optimism and you'll just quit.

The good news is that this provides a metric for determining what you need to change to achieve your dreams. If you try things and then quit, then you need to become more optimistic. If you keep trying the same thing and it doesn't work, you need a healthy dose of reality to see why your original plan didn't work and how to revise it. If you try different things and they don't work, expand the realm of your search. But don't ever give up on the things that truly matter to you.

Monday, July 27, 2009

Only When It's Funny

As I believe the purpose of life is to be enjoyed, making people laugh is one of the most important things I do. That's ironic, given that this blog is particularly unhumorous. Don't worry though; that's not a mistake I intend to correct today. Rather, I want to share a bit more about humor and what makes things funny.

When I was a kid, I saw the movie Who Framed Roger Rabbit. While it's full of cliche and loaded with cheap slapstick gags, it also includes a surprising amount of sophisticated humor and wit. I'll always remember one line from the movie though. Roger Rabbit is wanted for murder and trying to avoid the police. He inadvertently handcuffs himself to detective Eddie Valiant. The two finally make their way to a safe location where Eddie gets his hands on a hacksaw to remove the cuffs. If you want to watch how the scene unfolds, here it is.

Roger just slips his hand out of the cuffs and asks if that helps. Eddie is clearly pissed and asks if Roger could have done that at any time. Roger responds:

No, not at any time. Only when it was funny!

I love the implication that the toons don't have complete and total power over reality. They can only break the laws of physics when it's funny to do so. For example, Wile E. Coyote is only allowed to run off a cliff and walk on air when he doesn't notice he's not standing on solid ground anymore. Once he realizes that he's supposed to fall, he falls.

Furthermore, Roger's response speaks to the value of comedic timing. Things are funny in large part because of their context. If you watch any professional comedian, you'll see a clear difference between waiting one and two seconds before delivering the next line. Some lines would simply be less funny if delivered one second sooner. Jon Stewart of Daily Show fame is a particularly good example of this. Comedians with perfect timing learn just how long they need to wait before their audience comes to a certain mental conclusion. The humor derives from contrasting the comic's next statement with your current thought, and if the line is delivered too soon (or too late), it doesn't provide the right contrast.

Of course, knowing that timing matters isn't the same as knowing what to say or when to say it. No two people laugh at the same things either, so humor is a very personal thing. What makes something funny anyway?

I believe all humor reduces to cognitive dissonance between expectations and reality. When you expect life to be one way and it's different, your mind has a few possible responses. You either get offended or you laugh about it. Which response you have depends on how much you care about the topic. For example, suppose you mention that you've been feeling a bit sick for the past few days and your friend deadpans, "It's probably swine flu." Whether you chuckle depends on whether you actually think you have swine flu.

Not to be pedantic, but it's worth analyzing this joke in detail. The crux is that people overestimate the probability and danger of catching swine flu. Your friend is pretending to be one of these people who overreact, highlighting how irrational that reaction is. Rationally we know that the odds of actually catching it are extremely low-- more people die of the regular flu than swine flu. So in an ideal world, people wouldn't be worried about it, and that's the source of cognitive dissonance. The joke boils down to, "people are worried about swine flu but shouldn't be."

If you agree with that statement, you'll laugh with your friend. But if you think people aren't overestimating the deadliness of an epidemic which has claimed far fewer lives than car accidents have in the past six months, you'll be offended. Your friend's joke became a criticism of your own perspective.

And that's the beauty of comedy. Laughter is a reflection of what we consider unimportant, and the vast majority of our lives really don't matter. A 14 year old girl might be mortified for farting in class, but she would be a lot happier if she could laugh like the rest of her classmates. After all, no one cares as much as she thinks they do. If you want to be happy in life, you have to be humble enough to accept that you just aren't that important. Only then can you laugh at just how crazy and awesome this world really is.

Monday, June 29, 2009

Optional Law

Imagine there was a law that required everyone to pay a $100 tax each year. However, there is no way to find out who paid their tax and who didn't. All the government knows is the total amount of money that comes in and therefore the percent of people paying the tax. Is avoiding this tax ethically justifiable?

I don't mean to suggest that actions are only unethical if you get caught; I don't believe that is the case. When 10 people commit similar crimes and only 9 get caught, that doesn't make it right for the guy who got away. Rather I'm asking whether it's ethical to break a completely unenforceable law. I'm not sure, and perhaps it depends on how heinous the forbidden action is. But our perceptions of right and wrong change over time.

As a concrete example, consider the anti-sodomy laws that many American states had before they were struck down as unconstitutional. These laws restricted the sexual conduct of consenting adults and were virtually impossible to enforce. In the centuries since these laws were first passed, the majority public opinion no longer regards oral and anal sex as immoral.

The general argument against unenforceable laws is that they only punish people honest enough to follow them. In the tax example, both honest and dishonest people would get the benefit of however the nation spent the tax money, but only the honest people paid for it. This is not fair, though even unfairness does not necessarily make ignoring the law ethical. The argument also gets murkier when the law is at least partially enforceable, and when the prohibited action has a negative impact on society as a whole. Murder is wrong even if the law against it could never be enforced, for example.

A more recent example is the set of laws against unauthorized music downloads. Music companies don't want people using the things they produce without paying for them, but modern technology has made it very easy for people to do this anyway. The basic structure of the problem looks like this:
  • Party A (the music company) owns the rights to some information.
  • They allow party B (the purchaser) to use this information, but not to distribute it.
  • However, they cannot prevent party B from giving it to party C (the downloader).
At first, music companies tried to stop downloaders, but it's really too difficult to figure out who they are. So the latest tactic has been going after purchasers who distribute the music online. Even that has been very difficult, often requiring subpoenas to ISPs, and these tactics have not substantially increased music company profits or decreased illegal downloading.

The question is whether it's ethical to download music you haven't purchased. There is not much benefit to society from unauthorized downloading, but there is also not much cost. It's true that record companies lose some money from people who would otherwise have paid, but not every downloader was a potential customer. Plus the increased exposure from free downloads has in some cases turned into more paying customers, though I suspect the net effect is still a loss most of the time.

At any rate though, the real problem is that music companies are distributors who can no longer control their product distribution. No amount of legislation can make this business model profitable again; they need to find a new way to add value to customers before customers will give them money again. I'm undecided about whether illegal downloads are ethical. That said, music companies need to suck up the fact that it will happen anyway and stop trying to fix unsolvable problems.

Of course, it's easy for consumers to tell that to music companies, but harder to take that advice yourself. Here's the grand irony: the same people who support the legalization of free online music are generally opposed to unauthorized government surveillance. When it's someone else taking their words without permission, they strangely don't see freedom of information in the same light.

But the government's wiretapping program has clear parallels with music downloads:
  • Party A (music companies or a citizen on a phone) is the owner of information.
  • Party B (music purchasers or the telephone company) was given this information with the understanding it would not be given to people that A did not authorize.
  • Party C (music downloaders or the government) was given the information by party B anyway.
Similarly, it's not clear what the costs and benefits to society are. The government might be making the country more secure, or maybe the information doesn't help. No one wants to have their privacy invaded, so clearly there is a cost, but I'm not convinced the cost is freedom. Using the information as a way to ferret out "unpatriotic citizens" isn't that practical given the volumes of data that need to be analyzed by hand. Government agencies only have so much money to spend, and the biggest threats to national security are not unpatriotic ideas-- they are crazy people with bombs.

The biggest societal danger is of high level politicians trying to access information from specific domestic adversaries and using it for personal gain, but you don't need a massive government program to cull information on just a few thousand civilians. While this danger is real, I suspect it existed well before the Bush administration set up this widespread surveillance program.

So just as I haven't made up my mind about music downloads, I also can't decide on warrantless surveillance of citizens. They seem similar enough that they should both be moral or both be immoral. It's possible to argue for one and not the other, but that's a difficult argument to make; I'm not sure what that argument would be.

But there's one thing I am certain of:

Like the music companies, we should worry about solvable problems, not unsolvable ones.

No amount of legal pressure will have any impact on warrantless wiretapping. If you really care that the government might find out that you're meeting friends for drinks, then don't tell people that over the phone. Certainly you should stop posting it on Twitter. Or alternatively, learn to care just a little less about privacy. In the grand scheme of things, none of us are really that important, so you might as well be happy instead.

Monday, March 9, 2009

Love Thy Neighbor

Earlier this week, a reader made an interesting comment on my God of Stone post. Bryan analyzed the logic in the post and pointed out that it seemed to argue both in favor of and against revisionism in religion. My argument is actually progressive in nature. I favor revising religion when it's an improvement for humanity, and I'm against revisions that are not. For example, removing the moral restrictions on eating pork is a good revision. Praying to a statue of the infant Jesus is not.

As a reminder to newer readers, I am not a Christian anymore, although I was a highly devout one for thirty years. For an explanation of why I left my religion behind, you can read the posts In The Cleft of the Rock, The Man Behind The Curtain, and The Emperor's New God. I am now a humanitarian agnostic. In other words, I'm not sure if God exists, but I believe morality and ethics have meaning even outside the context of God.

While I don't agree with everything the Bible says, I certainly understand the perspective of those that do. Having studied the Bible for decades, I'm convinced that the biblical authors had a similar perspective-- that the true measure of morality and immorality is whether an action helps or hurts humanity. Furthermore, religious authorities in the Bible had no problems revising laws when the new law provided a greater benefit to humanity.

The clearest statement of this philosophy comes from Jeremiah 7. Jeremiah has received a message from God for the people of Israel and soundly chastises them for their idolatry and other immoralities. Then in verse 19, it says:
"But am I the one they are provoking?" declares the Lord. "Are they not rather harming themselves, to their own shame?"
In other words, no amount of immorality can possibly harm God. The whole reason things like idolatry are sinful is that they harm the sinner, by pushing them away from a God that loves them and will bless them with his presence.

This is a crucial argument. Given the premises of the Christian religion, it is absolutely impossible to harm God. Because God loves humanity, he wants them to prosper. Therefore, the only things that God considers sin are those things that work against the humans he loves. So if something doesn't harm any portion of humanity, it cannot be a sin.

Things are only sinful if they harm humanity.

I encourage Christians to stop thinking about God as a random set of likes and dislikes. "God liked Jews and hated pigs. Then later he decided pigs were alright, but homosexuality was still bad." That's just irrational. If you believe in intelligent design as most Christians do, then you need to accept that your God is rational and work from there. God likes stuff that helps humanity and hates stuff that hurts humanity. He might have a better understanding of what "help" and "hurt" mean, but that's as simple as it gets.

When you view the evolution of religion as a gradual improvement on rules that benefit humanity, many stories in the Bible make more sense. For example, in Genesis 9:3-4, Noah has just survived the Ark, and God changes the law of what food humans are allowed to eat. Before the flood, humanity was supposed to be strictly vegetarian, but now God says it's okay to eat animals. Why did God revise his law? Animals were no different than they were before the flood, and now there are fewer of them. Either you think this story is factually true, in which case God must have changed his mind. Or you think this is just a story, so at least the Bible's author has revised the religious law. In either case though, the reason for the revision is clear-- the new law benefits humanity more than the old law did.

God isn't the only important religious figure who revised the Judaeo-Christian religion. Jesus did too, in his Sermon on the Mount. Starting at Matthew 5:17, Jesus says:
Do not think that I have come to abolish the Law or the Prophets; I have not come to abolish them but to fulfill them.
He then proceeds to give nearly a dozen examples of things people should do differently from what the Torah (the law from Moses) instructs or permits. Here are some of them:
  • Insulting people is sinful
  • Looking at a woman lustfully is as bad as adultery
  • Divorcing a woman who has not been unfaithful is sinful
  • You should not seek retribution on people who have offended you
  • Love your enemies
The entire Sermon on the Mount was a revision of the Torah, or at least a reinterpretation. If Jesus said he didn't come to abolish the law and then proceeded to contradict the Torah, then the Torah must not be the law he was talking about. The simplest explanation is that the Torah is merely one interpretation of the laws "Love your God" and "Love your neighbor as yourself". This interpretation can be improved, and that's exactly what Jesus set out to do in his Sermon on the Mount.

If God was willing to revise his own law and Jesus could revise the law God gave to Moses, then everyone who believes the stories in the Bible must logically conclude that revising religious laws can be a good thing, as long as the new law does a better job of loving your God and neighbor. Sometimes the Bible got it wrong and needs to be improved with a good dose of common sense. If Jesus did it and Christians are supposed to act like Jesus, they should not be afraid to apply common sense to their religious text either.

And while I'm not a Christian anymore, this is the reason I believe that every Christian should be pro-gay. There is nothing about homosexuality that is inherently harmful to any human. It makes the couple happy, so logically there's no reason it should offend God. If there are rules against it in the Bible, then maybe those authors just didn't hear from God correctly and people should reconsider whether a ban on homosexuality actually embodies "love your neighbor". It's not at all loving to tell two consenting adults that they cannot marry each other just because they are of the same gender.

Monday, December 1, 2008

A Change of Perspective

This past Thursday was the American holiday of Thanksgiving. President Abraham Lincoln instituted the holiday as an annual occurrence, although the story hearkens back to a tale from 1621 of how the Native Americans ("Indians") welcomed the British colonists ("Pilgrims") to America, and how the Pilgrims gave thanks to God for the safety of the Atlantic voyage. Over the centuries, the religious factor was de-emphasized and the holiday was rewritten as a story of the Indians greeting the Pilgrims with a feast, and the Pilgrims thanking the Indians for their hospitality.

It's a fabricated holiday in that the actual feast probably didn't occur, but it captures the spirit of thankfulness the Pilgrims had. These days, Thanksgiving is a holiday to get together with family and (hopefully) be thankful for the good things in your life. So for most people that means travel, a big meal, and the stress of being with people you might not get along with. But you still have the opportunity for thankfulness if you want to take it.

There's a lot of value in thinking positively. I'm not talking about pretending bad things are good, or completely ignoring things that are obviously issues. Rather I'm referring to appreciating the good things, and not letting problems get in the way of that appreciation. When nine things go well and one thing goes wrong, it's easy to focus on the negative-- it's the part that needs your attention. Everyone sees their current set of problems as the biggest mountain in the world, even if it's really just a molehill in the grand scheme of things.

I've written a lot about the importance of accepting that things can't be perfect, even though it's good to strive for it. But that's different from being happy. When you work hard on something and some parts work out while others don't, you can accept this fact with a depressed attitude or with a joyful one. The attitude you pick won't change the world at all, so you might as well pick what makes you happiest. Focusing on the positive things can be hard, but it's a simple thing that makes your entire life better.

So with that in mind, I'm taking this opportunity to remember the successful portions of BrainWorks:
  • Highly realistic aiming
  • Intelligent item pickup selection
  • Context dependent weapon selection
  • Awareness and scanning that lets players try to outsmart bots
  • Dynamic feedback systems that let bots learn as they play
If you've read this blog for a while, you'll know there are a few things I'm not happy with. But to me at least, the project was overall a big success. I met or exceeded the objectives I set and I'm very happy with the results of the AI.

Last, working on this project provided an enormous amount of personal growth which I am also thankful for:
  • Increased self-confidence
  • More self-responsibility
  • Freedom from the burdens of Christianity
  • Less perfectionistic
  • More optimistic
  • Greater joy in life
Working on BrainWorks forced me to take an enormous amount of responsibility. I thought my God would come through for me, but things only came together when I took responsibility for myself. I'm sure the Christian readers will say that this was just God's way of building me up in strength. My perspective is that I finally realized there was no Christian God. But if what really happened was God gave me strength by teaching me not to believe or trust in him anymore, then praise God for that, and I'll continue following his instructions of relying on my own strength.

In short, I feel like I've won at life. This isn't the hardest project I'll ever work on, and not everything in my life is perfect. But working on BrainWorks gave me so much joy and freedom that I know I can handle whatever else life has in store for me. I'm still relatively young (barely into my thirties) and I figured out the purpose of my life. I love making awesome things, and I plan on doing that as long as I'm alive.

Monday, November 24, 2008

Boot Camp

An old friend of mine is a former army drill sergeant. I was surprised to learn this, as he didn't fit the stereotypical personality at all. Thoughtful and friendly are not the first words that come to mind when you hear the phrase, "Drill Instructor". This former job came up when someone mentioned they were leaving for basic training (aka. "Boot Camp") in six weeks, and my friend immediately told the guy, "Start doing push-ups now." Apparently you do so many push-ups in boot camp that it's never too early to get in shape, and the better physical shape you are in, the easier it goes. Not that boot camp is ever easy.

Intrigued by learning that my friend had been a drill sergeant, I asked him about basic training from the sergeant's perspective. According to him, the most important thing you can do to survive boot camp is to be practically invisible. You need to blend into the crowd and never even be noticed by the drill instructors. The instructor's job is to spot recruits that don't fit the mold and make them fit in.

"The instructors don't hate you," he said. "It's just that nothing you've learned in civilian life is of any use in the army, and they need to beat that crap out of you."

The typical civilian doesn't take orders very well. Generally orders are treated as "loose guidelines". Civilians are likely to ignore orders outright, only do the parts they want, or do things differently based on their preferences. That kind of attitude is acceptable (although not optimal) in real world business settings. In the army though, the lives of thousands if not millions of people are at stake, and defying orders can have large consequences that the enlisted units can't possibly be aware of.

If your army company controls a bridge, but your officer tells you to march one mile down a river and swim across at midnight, then that's exactly what you do. The option of taking the bridge across and walking down the other shore shouldn't even enter your mind, even though your officer never told you why he gave you your orders. Soldiers who disobey orders either get killed or get other people killed. That's why the number one objective of a drill instructor is to teach privates to follow orders.

There's a similar issue for AI in team play video games. If your team contains some bots or possibly "side kicks", the game designers generally give you some way to give them orders. No game designer considers making your minions sometimes ignore orders, of course. Rather, the problem is this: Just as most people aren't used to following orders precisely, they also aren't used to giving them precisely. There's a whole set of problems determining what the player even intends when they say something as simple as, "go back through the door". Which door? How far back? So much of language depends on context, and it's just hard to determine meaning from a small snippet of words.

The second issue is that no matter what order a player gives, they almost never want the bot to perform that task indefinitely and precisely. When you say "go guard the red armor", it's just until they spot an enemy. When the bot spots an enemy, you don't want it to let them get away if that enemy leaves the red armor spot. The bot should stop guarding and start attacking in that case. Similarly, if no one goes by the red armor for a while, the bot should stop guarding and find something useful to do. When the player says, "Guard the red armor", the bot needs to understand that as "Guard the red armor until you see an enemy or until you've been there for three minutes".

This is the basic strategy BrainWorks uses for its message processing, which is largely unchanged from the original Quake 3 message code. It does its best to determine what the player means and treats that instruction as a relatively important high level task. But it's still just a guideline. Stuff like combat takes precedence, and the bot will forget about the order after a few minutes. Ironically the bots need to function like civilians rather than soldiers, or the person playing the game wouldn't be happy with how the bots would follow orders.
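For the curious, the shape of this "order as a loose, expiring task" idea can be sketched in C (the language Quake 3 mods are written in). To be clear, the names, the enum values, and the three-minute timeout below are my illustration, not the actual BrainWorks message code:

```c
/* Hypothetical sketch: a player order tracked as a task with an
 * expiration time.  Combat always takes precedence over orders. */

typedef enum { ORDER_NONE, ORDER_GUARD, ORDER_ATTACK } order_type_t;

typedef struct {
    order_type_t type;    /* what the player asked for        */
    float        issued;  /* server time the order was given  */
    float        expiry;  /* seconds before the bot gives up  */
} bot_order_t;

/* Decide what the bot should do this frame: combat overrides any
 * order, and an order that has aged out is silently dropped. */
order_type_t bot_current_task(const bot_order_t *order,
                              float now, int enemy_visible)
{
    if (enemy_visible)
        return ORDER_ATTACK;          /* combat takes precedence */
    if (order->type != ORDER_NONE &&
        now - order->issued < order->expiry)
        return order->type;           /* order still in effect   */
    return ORDER_NONE;                /* forget stale orders     */
}
```

The point of the design is that the order never becomes a hard constraint: the bot simply re-evaluates it every frame, so "guard the red armor" naturally degrades into "do something useful" once the order expires or an enemy shows up.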

Monday, November 17, 2008

Ignorance Is Bliss

Having written twice about the dangers of believing everything you are told, I'd like to give some face time to the opposing argument:

Ignorance is bliss.

It's all well and good to say, "You should understand things, not just follow blindly." But to be pragmatic, there is only one time this is a serious improvement: when the blind belief is wrong. What about the times that blind belief is right? Humans survived for centuries without understanding how gravity truly worked and that didn't stop them from creating some awesome things. They were even able to use the fact that "things fall towards the ground" to great effect, such as building water clocks, without ever learning Newton's laws of motion.

Even if someone did want to totally understand the world today, it wouldn't be possible. There is such a vast corpus of information that learning even 1% of it is literally impossible in the span of a human life. Mathematicians who are experts in their field rarely have the chance to keep up on other branches of mathematics, to say nothing of physics, chemistry, or medicine.

The fact of the matter is that most of the information we've been told is correct. If I get on a bus that says "Downtown" as its destination, it really is going there. The driver could take it anywhere but I'm very certain that the destination is downtown. When I order a meal at a restaurant, I take it for granted that the cook can actually make the things on the menu and that I'll be served food that's reasonably close to the description provided. The waitress serves me food assuming that I will pay the price listed. I suspect that less than 1 in 10,000 people really understands what a computer does when you turn it on, but hundreds of millions of people use a computer every day. Understanding is a luxury, not a necessity.

Like it or not, our lives as humans are anchored in faith, not reason and understanding, and this is the cornerstone of the religion we all share: Causality. If we do something ten times in a row and get the same result, we expect that the eleventh time will produce the same result. And it does, even though we rarely know why. Understanding everything is impossible, and the whole purpose of culture is to provide structure so that everyone can use the discoveries other people made.

If that's the case, why bother thinking about anything at all? Why not let someone else do your thinking for you? The primary purpose of education isn't to give people information, but to teach them how to think. And most importantly, to teach them when to think. Thinking is really important in uncharted waters. In any situation that doesn't match your typical life experience, thinking will give you a much better idea of what to do than trying something at random.

Unsurprisingly, the same problem comes up in artificial intelligence. There's only so much processor time to go around. So if seven bots all need the same piece of information and they will all come to roughly the same conclusion, it's much faster to do a rough computation once and let them all use those results. This leads to an information dichotomy, where there's general "AI Information" and "Bot specific information". Each bot has its own specific information, but shares from the same pool of general information. In BrainWorks, all bots share things like physics predictions and the basic estimation of an item's value. If a player has jumped off a ledge, there's no reason for every bot to figure out when that player will hit the bottom. It's much faster to work it out once (the first time a bot needs to know) and if another bot needs to know, it can share the result.

These are the kinds of optimizations that can create massive speed improvements, making an otherwise difficult computation fast. If you think about it, it's not that different from one person researching the effects of some new medicine and then sharing the results with the entire world.

Monday, November 10, 2008

Social Mentality

I cannot help but comment on the results of the recent American presidential election, in which Barack Obama became the first non-white to be elected president. As a jaded American, I recognize that America has its share of both wonders and problems. But I have never been more proud of America than I was on this past election night. To me, the election of Barack Obama is symbolically a major victory in America's war against racism. And were Obama the Republican candidate and McCain the Democratic one, I would be no less overjoyed. Some things in life are more important than what percent of a candidate's positions we agree with.

I want to stress that this election is a victory over racism, not slavery. Most cultures in pre-modern times practiced slavery, but it was enslavement of people from the same ethnic background. Slavery in America and the Caribbean isles differed from other forms of slavery in that it was coupled with a sentiment of racial superiority. So even after the Emancipation Proclamation in 1863, the underlying tone of racism permeated much of American life. In contrast, the British Empire outlawed slavery in 1833, but its enslavement wasn't particularly racially biased, so Britain's past two centuries haven't been filled with the same racial tension. Abraham Lincoln won the war on slavery in 1865, but devastating the southern American states couldn't change the racist opinions that much of the country still retained. The election of Barack Obama to the office of President is proof that a large percent of America is now racially blind-- the ethnic background of a candidate is not a reason to select against them.

Of course, not all of America feels that way. That's why this election is a sign of major progress on the issue of racial discrimination, but it doesn't represent the end of it. Looking at the final electoral vote distribution, this election was heavily slanted towards Barack Obama but not unanimous. For example, Alabama and Mississippi again voted for the conservative candidate as they've done for decades. But Virginia and North Carolina both voted for Barack Obama, two states that split away from America during the American Civil War because they wanted the right to keep slavery legal (among other things).

This might seem like a trite point, but each major population center in America has its own local way of thinking. People in Los Angeles are more liberal than people in Salt Lake City, for example. The southern states tend to be more conservative than New England states. And the way they vote is a reflection of their local societal beliefs. If this were not the case, every city and state would vote exactly the same way. There's nothing magical about the geography that makes people think in certain ways. The social mentality is purely contained in the minds of all people in that local society, and if you think about it, that is an incredible thing.

For North Carolina to vote for a black man 147 years after it tried to secede from the United States, the entire cultural mentality had to change. It does not change quickly either. It wasn't until all the adults of that generation died, along with their children and grandchildren, that the majority of the state decided that maybe blacks and whites were equals. That should be a sign of how easy it is to believe the first thing you're told and how hard it is to consider outside opinions.

More often than not, people belong to the political party and religion of their parents, quite apart from the actual merits of those positions. Over 90% of America is Christian and well under 10% of India is Christian, despite extensive missionary work. Even if there were strong, logical reasons to believe in the Christian religion, it's clear that those reasons are not why America is a Christian nation. If those reasons were so logically compelling, then India would be a Christian nation too.

Like it or not, if your political and religious views closely match your family's and city's views, the odds are high that you haven't thought very hard about them. That doesn't mean your beliefs are wrong, but if they are right, you probably don't know why they are. It means you accept the basic beliefs of society's "hive mind", and you've ceded some of your thinking.

While this can be dangerous, accepting society's beliefs without too many questions can be really helpful. I'm not advocating, "question everything", but, "question everything important". You might be wondering why that is, and what this has to do with Artificial Intelligence. Next week I'll explain what I mean.

Monday, July 28, 2008

Mission Impossible

One of the basic realities of life is that if something is easy, it's already been done. In the grand scheme of things, all unsolved problems are hard problems. While that might not have been true 300 years ago, the advancements of widespread education and information technology have made it easy to find potentially talented individuals and make sure everyone knows about their work. The result is that for almost every technological problem you can think of, either someone has solved it already or it's very difficult.

So what do you do when you need to do something that hasn't been done before and it turns out the problem is very hard? How do you break an impossible problem into something solvable? While there's no right or wrong way to go about this, there are three basic approaches you can use. Assume that the objective of a perfect solution is either practically or literally impossible, and you just want the best solution that's actually feasible.

#1: Improve on an existing base

The concept here is to focus on the small, incremental progress you can achieve rather than the large, difficult end objective. This is the approach I took when solving the general high level goal selection in BrainWorks. Bots must constantly decide what they are most interested in doing. Should they track down a target or look for more players? In capture the flag, should they defend their flag or get the opponent's? I confess I've added nothing revolutionary to this logic, and the correct choice is something even experienced human players have trouble making. Instead I just added a few pieces of additional logic to help the bots decide. BrainWorks bots will consider tracking down a target they've recently heard but not seen, for example. It's not algorithmically amazing but it's progress and that's what matters.

#2: Break the problem into pieces and only do some of them

I used this approach when designing the item pickup code. Roughly stated, the implemented algorithm is to estimate how many points the bot would get if it were to pick up different items and it selects the item that gives the most points. While the item selection is reasonably good, there's a lot of corners cut for the purpose of reducing complexity. How valuable is a personal teleporter anyway? BrainWorks just values it at 0.6 points and calls it a day, but clearly such an item is far more useful in some situations than others. And the estimations of where enemies are likely to be are based on proximity to items, not on actual geographic locations (like being in a specific room or a hallway). The estimates aren't remotely perfect, but they are still good enough for the purpose they serve: a rough estimate of what item the bot should pick up next. Designing this code wasn't even possible without ignoring all the truly hard parts of item pickup.
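A stripped-down sketch of "estimate the points and pick the max" might look like the following in C. The base values, the travel-time discount, and the function names are invented for illustration; the real item code weighs far more factors:

```c
/* Hedged sketch: each candidate item gets a base point value,
 * discounted by how long the bot would take to reach it.  The
 * specific discount form (points per second-ish) is made up here. */
typedef struct {
    const char *name;
    float       base_points;   /* e.g. 0.6 for a personal teleporter */
    float       travel_time;   /* estimated seconds to reach it      */
} item_option_t;

/* Return the index of the item with the best value-to-effort ratio. */
int pick_best_item(const item_option_t *items, int count)
{
    int   best = 0;
    float best_rate = items[0].base_points / (1.0f + items[0].travel_time);
    for (int i = 1; i < count; i++) {
        float rate = items[i].base_points / (1.0f + items[i].travel_time);
        if (rate > best_rate) {
            best_rate = rate;
            best      = i;
        }
    }
    return best;
}
```

Even a crude scorer like this captures the key trade-off from the post: a nearby mediocre item can beat a distant great one, and the flat "0.6 points for a teleporter" style of valuation is one of the corners deliberately cut.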

#3: Accept defeat and deal with the consequences

This is essentially The Attitude of the Knife. Sometimes things just don't work out and can't be fixed, and all you can do is accept that reality and handle the consequences as best you can. This was the approach I took for the navigation code in BrainWorks. As I've mentioned before, the navigation tables have huge issues on anything other than a flat, two dimensional map. While I'd love to solve this problem and I'm certain I could do it, I just didn't have enough time to tackle it. There were a few clearly bad navigation decisions (such as jumping between ledges) that I wrote code to manually fix, but those were essentially stopgap measures.

While it's not ideal, there's nothing wrong with accepting defeat against hard problems. After all, you aren't the only person in the world who couldn't solve the problem.

Monday, July 7, 2008

A Simple Solution

As the story ended last week, I had uncovered a very strange bug in BrainWorks. There was up to a 10% accuracy difference between the first bot added to the game and the last bot. No one guessed the correct cause, but this one from cyrri was the closest:
bots occupy client slots in the same order as they are added.
in the very rare cases of two clients killing a third one simultaneously, it is allways the one with the lower slot id that gets the frag, becuase his usercommands are processed first. the other one gets a missed shot.
This actually does happen, but it only accounts for a 1% to 2% accuracy change between the first and last bots. Also, this value increases as bot accuracy increases, since it's more likely that two bots will be lined up for a good shot at the same time.

The real culprit was the mechanism through which the server processes client commands. The server processes each human player's inputs as soon as it receives them (as fast as 3 milliseconds for a human), but the inputs for bots are processed exactly once every 50 milliseconds. In turn, each bot handles its attacks and then moves. Then the next bot is processed, and they do this in the order they were added to the game. See the problem?

Every bot made its decisions based on where the other bots were located at the end of the last server frame, but only the first bot actually attacked targets in those positions. By the time the last bot aimed, every target had already spent 50 milliseconds moving. The first bot had 0 ms of latency and the last bot had 50 ms. Now 50 milliseconds isn't too bad, except the last bot was playing as if its latency were 0. That's why it missed around 10% more.

Since one of my original project constraints was that nothing would change in the server code, that meant that at least some bots had to play with 50 milliseconds of latency. There were no changes I could make to reduce this problem. So the solution was to add latency to all the bots. I wrote a feature into the AI core that tracked the positions of all players in the game for the last half second's worth of updates or so, for all bots and all humans. Then if a bot needed to know where a target was, it looked up the properly lagged position and did basic extrapolation to guess where the target would be at the exact moment of attack.
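A position history like this can be sketched as a small ring buffer in C. This is my simplified illustration (one coordinate shown for brevity, snapshot times assumed strictly increasing), not the actual BrainWorks code:

```c
/* Sketch: ~500 ms of position snapshots at 50 ms per server frame. */
#define HISTORY_FRAMES 10

typedef struct { float time, x; } snapshot_t;

typedef struct {
    snapshot_t frames[HISTORY_FRAMES];
    int        head;    /* index of the most recent snapshot */
    int        count;   /* how many valid snapshots exist    */
} pos_history_t;

void history_record(pos_history_t *h, float t, float x)
{
    h->head = (h->head + 1) % HISTORY_FRAMES;
    h->frames[h->head] = (snapshot_t){ t, x };
    if (h->count < HISTORY_FRAMES) h->count++;
}

/* Position at time "when": walk back from the newest snapshot to
 * find a bracketing pair, then interpolate linearly.  Asking for a
 * time past the newest snapshot naturally extrapolates (f > 1). */
float history_lookup_x(const pos_history_t *h, float when)
{
    int i = h->head;
    for (int n = 1; n < h->count; n++) {
        int prev = (i + HISTORY_FRAMES - 1) % HISTORY_FRAMES;
        if (h->frames[prev].time <= when) {
            const snapshot_t *a = &h->frames[prev], *b = &h->frames[i];
            float f = (when - a->time) / (b->time - a->time);
            return a->x + f * (b->x - a->x);
        }
        i = prev;
    }
    return h->frames[i].x;   /* older than everything we stored */
}
```

The same linear interpolation that answers "where was this player 50 ms ago?" doubles as the "basic extrapolation" mentioned above when you ask for a time slightly in the future, which is exactly where its weaknesses (staircases, ledges) show up.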

Implementing this system gave all the bots very similar accuracies (within 1% due to the issue cyrri pointed out). But now the problem was that all bots had the same accuracy as the worst bot, when they should have had the accuracy of the best bot. It turned out this "basic extrapolation" wasn't good enough. The original Quake 3 AI code used linear trajectories to estimate where a target would end up, nothing more sophisticated than that. So if a bot aimed at a target running to the bottom of a staircase, the bot would keep aiming into the ground for a while, even though a human would know the target would level off and run straight.

I tried some slightly more advanced solutions, doing basic collision checking against walls, but that didn't solve the problem of running down a ledge. I eventually concluded that humans have a learned sense of physics they take into account, and the bots would need that same sense if they were to play like humans. Solving this problem was both straightforward and time consuming.

I made an exact duplicate of the entire physics engine, modified it for prediction, and placed it in the AI code.

Every detail needed to be modeled-- friction, gravity, climbing up and down ledges, movement into and out of water. Even the force of gravity acting on a player on an inclined ledge. Everything had to be duplicated so that the bots could get that extra 10% accuracy in a human-like manner. It was not easy, and I learned far more than I wanted to know about modeling physics in a 3D environment. But in the end, it was worth it to see bots that could aim like humans.
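To give a flavor of what "predicting with real physics" means, here's a toy C sketch that steps a falling target forward with gravity and a ground plane. This is a deliberately tiny illustration of the idea; the duplicated Quake 3 movement code also handles friction, water, slopes, and everything else mentioned above:

```c
/* Toy sketch: step a target forward in time so the bot can aim at
 * where it will be, not where it was.  Two dimensions only (forward
 * x and vertical z), with a flat ground plane. */
typedef struct { float x, z, vx, vz; } body_t;

void predict_motion(body_t *b, float gravity, float ground_z,
                    float dt, int steps)
{
    for (int i = 0; i < steps; i++) {
        b->x  += b->vx * dt;
        b->z  += b->vz * dt;
        b->vz -= gravity * dt;         /* gravity pulls downward    */
        if (b->z <= ground_z) {        /* landed: clamp to the      */
            b->z = ground_z;           /* ground and stop falling   */
            if (b->vz < 0.0f) b->vz = 0.0f;
        }
    }
}
```

Notice that the landing step is the whole point: a linear extrapolator would keep the target sinking below the floor forever, while even this crude model "knows" the target stops falling and keeps running, which is the human intuition the post describes.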

Monday, May 26, 2008

Little Lies

I read an exceptionally good essay by Paul Graham entitled, "Lies We Tell Kids". It talks about white lies, the kind you tell to protect people rather than manipulate them. It's worth reading in full, but let me summarize what I took out of it:

People usually lie because the listener cannot emotionally or mentally handle the truth. While this is often the best thing for the listener at present, it has a cost: They do not understand reality as it truly is. They are unprepared to handle that truth when they encounter it in another fashion.

As I wrote earlier, I see strong parallels between designing AI and being a parent. There's a lot to be said for the "lies" that we as AI designers tell our AI: "No really, you can have perfect information about the entire world. And if you have to spend a long time thinking, that's okay. The entire world will stop and wait for you to decide." Whenever we let our AI cheat at the game, look up information it shouldn't know, or otherwise do things a player can't do, we are in some sense lying to our AI about what reality is like. And the price we pay is this:

We live in fear that our AI will encounter a game situation where the player will know the AI has cheated.

I spend a lot of time talking about good AI and bad AI, and why it's often worth the time to write good AI. But I'm not going to say we shouldn't lie to our bots! I recognize that "bad AI" has a place in AI design. Sometimes a corner must be cut because a problem is simply too difficult. And often these are not isolated exceptions. It's possible to write enjoyable AI that cuts an awful lot of corners as long as you know the right corners to cut.

For example, a bot can get away with almost anything if the player can't see the bot. The amount of cheating a bot can get away with dramatically decreases once the player actually watches the bot, in the same way that a magician's tricks only work when you watch the hand he wants you to watch.

The real key to writing good AI is learning which corners to cut and which problems must be explicitly solved in a human-like manner. In other words, it's about determining when to write bad AI and when to write good AI. In BrainWorks, I certainly erred on the side of too much good AI, but I think this is still a long term benefit. Once someone has solved the problem (and released the source code), everyone can reap the benefits and we can all move onto harder problems. But actually writing artificial intelligence that doesn't cheat about anything is an excruciatingly difficult problem. An AI that does everything like a human does would appear to observers to be a human. It could pass a Turing Test, the holy grail of Artificial Intelligence research.

When I say "Holy Grail", I mean it in more ways than one. Even the most optimistic estimates on when we'll be able to create a truly artificial mind say it won't be done for decades. My personal estimates are that it will be done in 300 to 1000 years. And some people claim that it is literally impossible to create an artificial mind, that it so complex that it can only be evolved or created by God (depending on your philosophical preference).

If you want an example of what roughly 700 years of research will give you, the atomic bomb was created only 500 years after the birth of Leonardo da Vinci. Think about how much science advanced in those 500 years. While that time was centuries long, it was punctuated by thousands of little scientific discoveries about how the universe worked. Einstein couldn't have had his flash of insight about how Newton was slightly wrong if Newton hadn't done work to come up with equations that were 99.9999% right.

I believe the next 700 years of AI Research (give or take a little) will be similarly punctuated. Humanity's journey to the creation of an artificial mind will be punctuated by little advancements. No one person could do all the work necessary to design everything needed for a true artificial mind, but every time an AI designer chooses to do a bit of "good AI" rather than cutting corners with "bad AI", we get one step closer to the final goal, even though the destination is light years away. I will not fault anyone for writing AI that cheats; I've done the same. But writing AI that cheats even a little less than usual is worthy of high praise.

Monday, May 12, 2008

The Emperor's New God

Last week, I ended by explaining how I came to realize I didn't have a relationship with God. In short, I empirically tested whether or not God was speaking his words into my mind and I concluded that at least some of the time, things that "felt" like they were from God were not. Specific things I thought God had promised me didn't happen, so that must not have been from God. Moreover, even simple thoughts that were no more specific than a horoscope were also no more accurate. In short, I realized I had no evidence to conclude I had ever communicated with a higher power. And lacking a relationship with God, I realized I was not a Christian.

That is not the same thing as giving up my faith.

Indeed, for a few months I desperately wanted "back in". There is a categorical difference between not having a relationship with God and being unable to have that relationship. Even to this day, if it turns out God exists, then I really want a relationship with that God. Who wouldn't want to be friends with the creator of the universe?

So having lost the evidence supporting my religious faith, I started from first principles, to see if the tenets of Christianity logically made sense. In other words, I wanted to logically test whether Christianity was truth but modern Christians had pursued the faith in the wrong manner, or if the religion just didn't make sense and I should give up on it altogether.

For those of you who aren't that familiar with Christianity, here is the core doctrine:
God loves people, but God is perfect and righteous, and therefore cannot allow sin to exist in his presence. Because all humans are innately sinful and God loves them in spite of this, God devised a plan that would allow him to have a close relationship with humans on an individual level. God sent Jesus, who is himself fully God and fully human, to live a perfect, sinless life and then become an atoning sacrifice for all of humanity. Jesus suffered the penalty of death instead of me so that I could live and have a relationship with God. And Jesus.
When I broke the dogma down to this description, there were three potential logical issues that I spotted. If any one of these things is false, the religion doesn't make logical sense. Here are the issues:
  • Why can't God let sin exist in his presence?
  • Why is the penalty for sin death?
  • Why is someone else dying for my sin considered justice?
According to the Bible, Jesus spent a lot of time with prostitutes, drunkards, and other "sinners", expressly because these were the people who needed ministering the most. If Jesus is fully God, why wasn't Jesus bothered being around sin? That sounds like a strong logical contradiction.

I don't have a problem accepting that everyone is a "sinner", meaning that no one lives a perfectly righteous life, never mistreating or hurting other humans, or taking advantage of them. But why is it fair that living unrighteously should be met by death? Jesus himself talks about how some sins are worthy of greater punishments than others, and a spiritual death in hell is clearly the worst punishment someone could earn. By and large, most people never do anything so disruptive to society that death is a just punishment for them. It doesn't make logical sense that all kinds of unrighteousness should be worthy of the same death sentence.

Last, there's the question of why, if I'm guilty and deserving of death, it's fair for someone else to die on my behalf. This kind of thing goes against the basic philosophy of penal systems. If I'm sentenced to serve 10 years in prison, the courts don't consider it fair if someone else chooses to serve 10 years instead, or even 30 years. That option is simply not allowed because it doesn't even relate to the issue at hand. The Bible supports this opinion as well, according to Ezekiel 18:20:
The soul that sins shall die. The son shall not bear the iniquity of the father, neither shall the father bear the iniquity of the son.
Historically there have been some penal systems that allowed, by consent, some people to receive the punishment intended for others. The result has always been that rich people pay poor people to serve sentences for them, and in most cases the systems were changed to disallow this abuse. So if I really am worthy of death for my sins, I don't logically understand why it would be okay for Jesus to die instead of me. That simply doesn't make sense.

I sent this list of questions to four Christians whose opinions I respected. All of them were pastors at the time or had been in the past, and two of them had served as missionaries in the third world as well. Of them, one actually tried to answer the questions. Two said they would think about it and get back to me, which never happened. The last didn't even respond. And all the answers I received boiled down to, "God knows better than you. It makes sense to God and that's what matters."

I'm sorry, but that's not a strong enough argument for me to base my life around a religion. The overall silence spoke louder than words and confirmed what I had suspected all along: the Christian religion is fundamentally flawed. A God might exist, and I certainly hope he does. But he is not the Christian God. A sentient being that could create the beautiful physics behind our universe can create a religion that doesn't have glaring logical holes in the core of its doctrine.

Now all that said, I'm still a big supporter of many Christian ideals as they relate to society. "Love your neighbor as yourself" is a good general rule for life, and creates a society that's better for everyone as a whole. Just because I've given up religion doesn't mean I've given up ethics.

At any rate, thanks for taking the time to hear my story. Next week I promise I'll be back with more AI themed discussion.

Monday, April 28, 2008

In the Cleft of the Rock

As I mentioned earlier, I was a devout Christian when I started programming BrainWorks, and now that I've finished, I am an agnostic with no particular religious leaning. My work in artificial intelligence was not the only reason I gave up my religious faith, but it is part of the reason. Moreover, it is a story worth telling, and I hope it will help both Christians and non-Christians gain a greater understanding of each other, something that is sorely lacking in this world where rational justification often takes a back seat to dogmatic conclusions.

To understand the process of giving up my faith, however, you must understand the faith I had. Many people who call themselves Christians, perhaps most, don't have much to do with the actual tenets of the religion. To the average Christian, being a Christian means that God and Jesus love you, and Jesus died for your sins so that when you die, you go to Heaven instead of Hell. In other words, Christianity is a nice feeling in your heart and your insurance policy for when you die. It has little impact on your actual life, except for a general imperative to "do good things", which many Christians ignore, or act as if it only applies to other Christians.

I was never this kind of Christian.

My core belief has always been in causality, and by extension rationality. I have always believed that if something is true, you can trust in its consequences to be true as well. So I believed in the core tenet of Christianity, which is:
Jesus died to pay for my sins and rose from the dead so that I could receive God's spirit and thereby have a loving, personal relationship with the God in this life, and be with God in heaven after I die.
And I believed everything that logically follows if this is true. So yes, I believed that God can and does do miracles, even in this day and age. Real miracles too, not "I saw the face of Jesus in my pop tart". Stuff like "I was born blind and now I can see". I believed that people can and did hear from God, and that if you pray to God, he hears you as well.

Note that this is fundamentally different from saying that the Bible is the 100% true, infallible word of God. There are some clear logical inconsistencies, and I just wrote that off as "I guess they didn't hear correctly from God about that part", or "someone corrupted this text for their own purposes". The most glaring example is the book of Daniel, which purports to have been written during the Babylonian captivity. However, the book contains words whose linguistic roots trace back to the Persian Empire, a culture the Israelites didn't encounter until well after Daniel died.

There were also some religious statements that don't logically make sense. The traditional explanation for these things, such as "don't eat pork", is that they apply to older cultures for some reason but don't apply to the culture of today. However, I took the stance that maybe people just didn't hear correctly from God and the Bible's authors recorded the wrong thing.

For example, even as a Christian I did not think homosexuality was immoral. There is no logical reason that consenting sex between people of the same gender would be wrong but between opposite genders would be okay. "God hates it for some reason" isn't a good argument. According to the book of Jeremiah (Jeremiah 7:19), things that are wrong are wrong because they are detrimental to humans. There is certainly nothing humans can do to injure God! Our actions can only harm humans, so if an action does not hurt anyone, it cannot be immoral. Consensual sex falls into this category regardless of gender.

So as a Christian I had no problem admitting that the Bible was flawed and even wrong about some things. I disagreed with some very popular Christian opinions, and not just regarding homosexuality. But I still considered myself a Christian, because you can still believe the Bible is wrong about some things (bacon, homosexuality) and right about other things (Jesus died so God can have a relationship with me). I still believed that I could hear from God, talk to God, and witness the supernatural miracles of God.

But around one year ago everything changed, for two very different reasons. One reason convinced the pragmatist in me and the other convinced the idealist, and the two arguments together made me undeniably admit that I had been very, very wrong for the past 30 years of my life.

Please believe me when I say that I was not looking for a reason to leave my religious faith. As a Christian, I felt totally in love with God. To this day, I have never felt more joy in my life than during a Sunday worship service, singing hymns and praises. Being confronted with the irrefutable reality that I was wrong about God was agonizing and heart wrenching, and met with many tears. For months I felt devastated, and I still wonder if I will ever find something that brings me as much joy. But I had no other choice. I do not worship God; I worship truth, and by extension reality. If the truth is that there is no God, or at least no Christian God, then my only option is to act on that truth.

I am confident that most atheists and agnostics do not have the faintest idea how much safety and security a devout Christian gets from their religion. From the agnostic point of view, it's easy to think of Christians as weak minded for taking so much on faith and so little on reason. When a Christian doesn't change their mind even when presented with solid reasoning, that conclusion seems natural. But it's difficult to comprehend the amount of mental and emotional security that a religion brings, even a false one. The sheer fear of being wrong is enough to make most religious people flat out ignore solid arguments to the contrary.

I'm glad I took the red pill, but the blue pill would undoubtedly have been less painful. It is my hope that godless people would have compassion on those who still have religion, and understand that even though many may fear they are wrong, they lack the emotional strength to take the large steps that seem easy for us.

Monday, March 31, 2008

Parenting

Writing artificial intelligence is a lot like being a parent. It requires an unbelievable amount of work. There are utterly frustrating times where your children (or bots) do completely stupid things and you just can't figure out what they were thinking. And there are other times they act brilliantly, and all the effort feels satisfying and well spent.

Fascinatingly enough, people rarely ask the question of whether being a parent is worth the effort. There's an implicit belief that once you grow up and get married, you should have children. It's like the option of not having children doesn't exist. And of course once you do have children, you must do everything in your power to raise them as well as you can. Questions of whether becoming a parent is a good idea, or how much time should be invested in children rather than your spouse, are discouraged if they are even asked.

Now I'm not suggesting that parents shouldn't spend a lot of time parenting. Once you've committed the next 18 years of your life by having a child, it seems like you should live up to the responsibility you create for yourself. I'm just wary of people claiming there's only one answer when they aren't even asking the question. Doing the right thing for the wrong reason might cause you to do the wrong thing later, as situations change. A good example from parenting would be an excessive amount of hand-holding once your child goes off to college. If you always believe you should give 100% to help your kids no matter what, you might end up doing their laundry and cleaning their room when they turn 25. Certainly the purpose of parenting is to turn children into adults, and at some point good parenting involves letting your child be an adult. So the core question of, "Is it worth the trouble to have children?" is a real question, and the answer isn't always yes, although many people assume it is.

What seems strange to me is how people answer the related question, "Is it worth the trouble to program good AI?" And they almost universally say, "No". There are countless games where the AI overtly cheats to win-- pretty much every real time strategy game. Almost every first person shooter to date has had massive problems navigating around a level, Quake 3 included. Most squad-based AI is exploitable by any player who has encountered it a reasonable number of times. And the solution of heavier scripting might make AI seem more realistic when first encountered, but it's far less realistic every other time. Companies resoundingly see little financial benefit in creating genuinely good AI.

I find these answers very much at odds with each other. Writing good AI requires more intelligence than raising a child well, but certainly less time. Moreover, good scientific research results are never lost. They can be built upon for centuries. They still teach Newtonian mechanics in colleges, even though we know Newton was technically incorrect (but close enough for most practical purposes). There is a lot of potential long term value in designing good AI, even if the short term profits aren't there. Similarly, the cost of raising a child these days is easily over $100,000. That's a huge short term investment that, quite frankly, never pays off for a lot of children in long term societal benefit.

The real conclusion is that economic and scientific valuation have almost nothing to do with the actual choices people make. When someone says, "Is it worth it to raise children?" or a company asks "Is it worth it to make good AI?" they are using different definitions of worth. The company wants to make money but the potential parent wants to enjoy life.

People don't do what makes them the most money. People do what they love!

Why are there over six billion people in this world? Because people are genetically programmed to love having children. The existence of hormones that encode these feelings of love doesn't make the love any less genuine. Children require effort, but the love for children overrides the dislike of effort (for most people, at least). Without the love of children coded into the DNA of humanity, good parents might become as rare as AI programmers. There aren't many people who love designing AI.

Of all the things I have learned designing AI, the most important is this:

Find what you love doing, then do it.

That said, there are surely people who would love life more if they didn't have children. And there are companies that make money by writing good AI. I would love to see more people enjoying life, even if that means going against societal trends and not having children. I would also love to see more companies making money by writing good AI. Not every company can or should do that, but I believe there's room for advancement in both areas.

Monday, March 3, 2008

Good Enough For Government Work

This might be a surprise if you've read my post on pragmatism, The Attitude of the Knife, but I'm actually an idealist. I don't want things to be good enough; I want them to be perfect, or at least as perfect as pragmatically possible. I use the attitude of the knife to temper my idealism.

Of course, there is tension between idealism and pragmatism when encountering very difficult problems. I don't just mean, "you'll have to be really smart to solve this problem." I mean, "it's mathematically impossible to solve this problem using the computer you have access to." The idealist wants 100% and the pragmatist wants 99% or even just 90%.

The classic example of a mathematically difficult problem is the Traveling Salesman Problem. If you're not familiar with the problem, it works like this: Suppose you have a salesman who wants to visit every city in a certain area exactly once, and he knows the distance between each pair of cities. What's the minimum distance he'll have to travel to visit all the cities and how do you determine it?

Mathematicians have been studying this problem for almost 200 years now. It turns out that determining the perfect solution is really, really hard, but it's possible. Worst case you can just check every possible ordering of cities. That worst case is unbelievably bad, however. If the salesman had 30 cities to visit and a computer could test one trillion options per second (currently no single computer is that fast), testing all these options would take much longer than the current age of the universe. Most of the studying mathematicians have done on this problem revolves around ways to get a "pretty good" answer without spending billions of years to do so.
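To make the explosion concrete, here's a small Python sketch of the two approaches: an exhaustive search that checks every ordering, and a simple nearest-neighbor heuristic that just walks to the closest unvisited city. The coordinates and function names are purely my own illustration; the exhaustive search is O(n!) while the heuristic is O(n²):

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total distance of a round trip visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(points):
    """Check every possible ordering: (n-1)! tours for n cities."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda rest: tour_length(points, (0,) + rest))
    return tour_length(points, (0,) + best)

def nearest_neighbor(points):
    """Greedy heuristic: always travel to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        order.append(min(unvisited, key=lambda c: math.dist(points[order[-1]], points[c])))
        unvisited.remove(order[-1])
    return tour_length(points, order)

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(9)]
# The heuristic tour is never shorter than the true optimum, but it
# finishes instantly even for city counts where brute force would
# take longer than the age of the universe.
assert brute_force(cities) <= nearest_neighbor(cities)
```

Even at 9 cities the brute force checks 40,320 orderings; add one city and that multiplies by 9. That's the wall the "pretty good answer" research is climbing around.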

I say I'm an idealist, which means I want things to be perfect. But I'm also ruthlessly pragmatic. These contrasting ideals intersect on the definition of perfect. If it takes 1 trillion years to find the traveling salesman route that takes 38 days but 1 hour to find the route that takes 42 days, then the 38 day solution isn't perfect. It's really 1 trillion years plus 38 days, which is definitively worse than 42 days, 1 hour. A "perfect" solution must take into account the constraints surrounding the problem and not just the problem itself. The mathematically optimal solution is not the perfect solution expressly because it's not practical enough to use.

What does this have to do with Artificial Intelligence, and specifically item pickup? Deciding which items to pick up is very similar to solving the traveling salesman problem, and that's bad news for me as the AI designer! If there are 20 items on a level, a bot wants to pick up between 0 and 20 different items on the way to its final destination. That's similar to a traveling salesman wanting to visit exactly 20 different cities. And since BrainWorks bots can't spend several trillion years to decide which items they're going to pick up, their item pickup code cuts a lot of corners to get a pretty good solution in a reasonable amount of time. This isn't the mathematically optimal solution, but it is a better choice overall.

Here are some of the tricks BrainWorks uses to reduce the computation time:
  • Nearby items are grouped into a single cluster. Bots consider picking up the entire cluster at once. This reduces the effective number of things to consider.
  • Bots do not consider picking up more than four items at once before going to their final goal.
  • Bots only consider picking up the dozen or so items that are relatively near their current position, or not far off the path to their destination.
  • When the bot is very near an item (less than a second away), it automatically picks it up rather than doing a full computation.
These changes have a profound effect on the size of the search. Suppose a level with 40 items gets grouped into 25 clusters and the bots never consider more than 4 total pickups from the 12 nearest items. Each item pickup decision requires testing at most 800 options, rather than more possibilities than there are people on planet Earth.
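A sketch of how that pruned enumeration might look, in Python with hypothetical names (this is my own illustration of the pruning rules above, not actual BrainWorks code):

```python
import itertools

def candidate_pickups(clusters, bot_pos, dist, max_items=4, nearest=12):
    """Yield every pruned set of item clusters worth evaluating.

    clusters: identifiers for the item clusters on the level
    dist(a, b): travel-distance estimate between two positions
    Only the `nearest` closest clusters are considered, and no set
    contains more than `max_items` clusters.  With nearest=12 and
    max_items=4 that is C(12,1)+C(12,2)+C(12,3)+C(12,4) = 793 sets,
    which lines up with the "at most 800 options" figure.
    """
    near = sorted(clusters, key=lambda c: dist(bot_pos, c))[:nearest]
    for k in range(1, max_items + 1):
        yield from itertools.combinations(near, k)

# Toy usage: 25 clusters laid out on a number line, bot at position 0.
clusters = list(range(25))
count = sum(1 for _ in candidate_pickups(clusters, 0, lambda a, b: abs(a - b)))
print(count)  # 793
```

The actual decision would then score each candidate set (item value versus travel time) and pick the best, but the enumeration above is where the trillion-year worst case gets cut down to a few hundred cheap evaluations.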

It's rare that thinking 10 item pickups ahead would really help the bot more than the first two or three choices. And usually the bot doesn't want to travel halfway across the map to pick up some random item. Only the nearby items are worth considering. For the one or two items that really might be that good, the bot specifically notes these items and includes them in the list of possible pickups, no matter how far away they might be. And as it turns out, these results are still "very good". Often the theoretically best solution isn't the best for the problem at hand. If you idealize the results and not the method, being idealistic involves selecting the most pragmatic solution.