Why businesses will have to audit algorithms, AI and account for risk

ZDNet caught up with Wharton professor Kartik Hosanagar to talk about his book, A Human's Guide to Machine Intelligence, and the growing pains of AI.

Algorithms will need transparency and governance to avoid unintended consequences and risk.

The proliferation of algorithms brings a bevy of unintended consequences as well as a healthy dose of opportunity. The problem is that businesses aren't prepared for the business risk and broader implications that algorithms carry.

In his book, A Human's Guide to Machine Intelligence, Wharton professor Kartik Hosanagar makes a case for more transparency, procedures to audit algorithms and even regulation.

I caught up with Hosanagar to talk about his book, examples of how algorithms have gone bad, and the risks and rewards ahead for AI. I found Hosanagar's book insightful and easily digestible.

Takeaways include:

  • Algorithms are already proliferating.
  • What Microsoft learned from its AI chatbot experiments in China and the U.S.
  • The need for algorithm auditing.
  • Business and social risks that come with algorithms.
  • The roles of government, industry and enterprises in regulating algorithms.
