Stanford makes a startling new discovery. Ethics

The president of the university where so many tech titans started says that he wishes someone had thought about teaching them ethics, oh, a decade ago.

Hoover Tower at Stanford University. Uladzik Kryhin, Getty Images

A childish name and an equally immature slogan used to be enough.

Google had "don't be evil."

Yahoo just had an exclamation point, as if you were supposed to be joyously surprised by its offerings every day.

Facebook had "making the world more open and connected."

Most of the Valley peddled "making the world a better place."

Somehow, though, little regard was paid to whether they were actually doing that.

Now, in a glorious moment of chest-beating and head-bobbing, Stanford University president Marc Tessier-Lavigne has admitted that his university -- which spawned so many young, great tech titans, such as the founders of Google, Instagram and LinkedIn -- failed to make titanic efforts in the area of ethics.

In an interview with the Financial Times, he revealed that the university now intends to explore the teaching of "ethics, society and technology."

As we survey the political and social carnage that seems to have been enabled by technology over the last few years, it's remarkable that this wasn't thought of before.

It's not as if there weren't occasional concerned mutterings when, say, Google was shown in 2010 to have snooped on people's Wi-Fi.

Nor, in the same year, when Facebook CEO Mark Zuckerberg cheerily declared that people really weren't interested in privacy anymore.

The answer from tech companies was always a version of the same: "Oh, we're sorry. But trust us, it won't happen again."

There was always the suspicion that what they were really thinking was: "Sigh, this is so boring. Look, we're smarter than you. Can't you just let us get on with changing the world like the brilliant engineers we are?"

Of course, they did change the world. They moved so fast and broke so many fundamental things that society's pillars -- the law, for example -- simply couldn't keep up.

Now, like children who have just broken a toy with their Ritalin-free enthusiasm, they look down and at least begin to see that what they wrought was a touch fraught.

It may be that, as generations change, new Stanford students will emerge with their hearts in the right place. Or at least locatable.

It's refreshing, too, that Stanford is being open about its own possible role in unleashing uncontrolled engineering-based entrepreneurship.

Indeed, Tessier-Lavigne offered the FT these touching words: "Maybe some forethought seven to 10 years ago would have been helpful."

Maybe.

But if it took so little time to break the world, how long might it take to put it back together again?

Artificial intelligence should make it easier, right? Of course it will.
