
Report from Foo Camp -- goodness!

Written by Esther Dyson, Contributor

I spent the weekend at Science Foo Camp, cosponsored by O'Reilly and Google.  Apart from the excellent food (of course!), there was also some excellent discussion in this mostly user-generated event.  I personally generated a session on ethics; the users generating the content included Stewart Brand, Sol Katz, Mitchell Baker, Sunil Paul and my brother George Dyson.   (It was a small session, generally the best kind.)

Here's a non-attributed rundown of what we discussed, which mostly ended up in questions.

First off, Stewart Brand asserted that morality and ethics are orthogonal: Morality is personal and (for some) religious. You can't argue (effectively) about morality.  By contrast, ethics is public and negotiable; it's the rules by which a society lives.  That's a nice distinction, but of course it simply avoids all the trouble that occurs when some people want their different private moralities to be part of public ethics.  And most public ethics are at least partially based on some least-common-denominator morality.  Be that as it may, it's a useful distinction.

What intrigues me is the impact of information/transparency on our inclination for "goodness" (your definition here).  IT/communications technology has had an impact on ethics not just in the "we have made the [digital] bomb" way one might think of at first, but also in making ethical dilemmas sharper and more omnipresent.  We have more knowledge, which makes us face more choices…and more responsibility.

That is, in the past, we knew less.  We didn't know much about the likely consequences of our actions, so we could just choose what felt good.  And we had little knowledge of all the evil around us in the world.  We couldn't feel responsibility for genocide if we didn't know about it.

Now, however, IT and communications have given us the ability to predict outcomes - and statistically, to assess what would have happened had we behaved differently.  If you can test for a disease that could be cured with early treatment, you have an obligation to test for the disease - and then to treat those who have it.

With complete transparency, in theory, we could also stop most criminal/terrorist behavior.  (Time out for various rants, and a resigned consideration of the need for two-way surveillance: If the government watches us, we should at least be able to watch the government back. But at what cost?)

But at the same time, does transparency about "normal" bad behavior - the sorts of things you see on TV and that people no longer hide, ranging from greed and crudeness to drivers swearing at pedestrians, littering and overly casual dress - encourage it?  Will the searches now visible courtesy of AOL encourage not just the timid to be less timid, but the cheaters to cheat and the liars to lie?  Does increased exposure increase our tolerance, both for individuality and for things perhaps better not tolerated?

And more interestingly, at bottom, does increased transparency about evil or badness (or whatever you want to call it) make us less likely to act ethically or generously than we might have been?

This brings to mind the "paradox of [too much] choice," ably described by Swarthmore psychology professor Barry Schwartz in his book The Paradox of Choice.  Much reduced, the book says that not only are we confused by too much choice, but that we take less pleasure in the choices we do make because we regret the other valid (perhaps even superior in hindsight) choices we did not make.  We feel responsible for our dissatisfaction because we had choice; we cannot blame it on circumstance. The antidote, says Schwartz, is to be a satisficer, not a maximizer.  As for institutions, they should offer "libertarian paternalism": Suggest a default, but allow for other options.

So apply that to bad things:  If you know about genocide in Rwanda, and little else, you may be moved to act.  But if you know about genocide in Rwanda plus AIDS in Tanzania and government-fostered starvation in North Korea, torture in US and Chinese prisons, homelessness in Chicago and bribery of politicians in California, you're likely to throw up your hands in despair.  And that's not just because you can't pick which evil to tackle first, but because the presence of each evil lowers the emotional impact of the others.  If you fix poverty in Gainesville, poverty in Mississippi will still gnaw at your conscience.

What to do in response?  Perhaps what Amazon does for the paradox of too much good choice: a recommendation engine.  Amazon lets you choose almost anything (or any book) you want. But it helps you with recommendations so you don't feel overwhelmed - and perhaps so you're not confronted with all the choices you didn't make. 

And now, here is something that (embarrassingly) I didn't think of in Mountain View: there is indeed such a recommendation engine, still in stealth, and I am an investor in the company behind it.  I just hadn't figured out how to describe it until now.  (Of course, just like Amazon, it doesn't just recommend; it helps you understand your choices as well.  And unlike Amazon, it will help you track their consequences.)  But enough for now; more on Important Gifts in due course.
