
Obama's report on the future of artificial intelligence: The main takeaways

We read the Obama administration's report on artificial intelligence in full. Here are the main themes to ponder.
Written by Larry Dignan, Contributor

The Obama administration released a report on the future of artificial intelligence and addressed everything including job loss, ethics, bias, and positive outcomes for multiple industries.

There's a lot to digest in the full report, as others have noted. I pulled out a few key talking points to ponder as AI advances.

AI can be a solution to the workforce displacement it will create. The report noted that AI itself can help workers retrain and ultimately transition into a big-data economy.

DARPA, intending to reduce from years to months the time required for new Navy recruits to become experts in technical skills, now sponsors the development of a digital tutor that uses AI to model the interaction between an expert and a novice. An evaluation of the digital tutor program concluded that Navy recruits using the digital tutor to become IT systems administrators frequently outperform Navy experts with 7-10 years of experience in both written tests of knowledge and real-world problem solving. Preliminary evidence based on digital tutor pilot projects also suggests that workers who have completed a training program that uses the digital tutor are more likely to get a high-tech job...

Currently, the cost of developing digital tutors is high, and there is no repeatable methodology for developing effective digital tutors. Research that enables the emergence of an industry that uses AI approaches such as digital tutors could potentially help workers acquire in-demand skills.
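The report doesn't describe how the DARPA tutor works under the hood. As a rough illustration only, here is a minimal Python sketch of Bayesian knowledge tracing, a standard technique in intelligent tutoring systems for estimating whether a learner has mastered a skill. The function name and parameter values are mine, not from the DARPA program.

```python
# Minimal sketch of Bayesian knowledge tracing (BKT), a standard technique
# in intelligent tutoring systems. The report doesn't describe the DARPA
# tutor's internals; parameter values here are illustrative only.

def bkt_update(p_known: float, correct: bool,
               p_slip: float = 0.1,    # a master answers wrong anyway
               p_guess: float = 0.2,   # a novice answers right by luck
               p_learn: float = 0.15   # chance of learning on this step
               ) -> float:
    """Return the updated mastery estimate after one observed answer."""
    if correct:
        posterior = p_known * (1 - p_slip) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        posterior = p_known * p_slip / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # The learner may also acquire the skill during this practice step.
    return posterior + (1 - posterior) * p_learn

# Example: mastery estimate after one miss, then three correct answers.
p = 0.3
for correct in (False, True, True, True):
    p = bkt_update(p, correct)
    print(f"{'right' if correct else 'wrong'} -> mastery ~ {p:.2f}")
```

A tutor built on an estimate like this can keep drilling a skill until mastery crosses a threshold, which is one plausible way to compress years of apprenticeship into months of targeted practice.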

How will we know when AI comes of age? The report noted that it's difficult to gauge the milestones of AI as it makes leaps toward general intelligence. These milestones would include success at broad unstructured tasks, unification of different AI methods, and solving specific technical challenges.

At present, the majority of basic research in AI is conducted by academics and by commercial labs that regularly announce their findings and publish them in the research literature. If competition drives commercial labs towards increased secrecy, monitoring of progress may become more difficult, and public concern may increase.

AI bias. The report noted that computer science is dominated by white males and that there's a need for more diversity. Otherwise, AI will carry the biases of the people who create its algorithms.

Commenters focused on the importance of AI being produced by and for diverse populations. Doing so helps to avoid the negative consequences of narrowly focused AI development, including the risk of biases in developing algorithms, by taking advantage of a broader spectrum of experience, backgrounds, and opinions. These topics were also covered extensively during the public workshops. There is some research on the effects of a lack of diversity in the AI workforce on AI technology design and on the societal impacts of AI. This rich body of research is growing but still lagging behind the literature on broader AI workforce development needs. More research would be beneficial.

The report continues with a long passage on AI bias, adding:

The difficulty of understanding machine learning results is at odds with the common misconception that complex algorithms always do what their designers choose to have them do, and therefore that bias will creep into an algorithm if and only if its developers themselves suffer from conscious or unconscious bias. It is certainly true that a technology developer who wants to produce a biased algorithm can do so, and that unconscious bias may cause practitioners to apply insufficient effort to preventing bias. In practice, however, unbiased developers with the best intentions can inadvertently produce systems with biased results, because even the developers of an AI system may not understand it well enough to prevent unintended outcomes.
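To make that point concrete, here is a hypothetical Python sketch. Every name and number in it is invented: a model that never sees a protected attribute can still produce group-skewed results when a seemingly neutral feature acts as a proxy and the historical training labels encode past bias.

```python
# Hypothetical illustration: a model trained without any protected
# attribute can still produce group-skewed outcomes when another feature
# is a proxy for group membership. All names and numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # never shown to the model
qualified = rng.normal(0, 1, n) > 0      # identical across groups

# "zip_score" is a proxy: its distribution differs by group, and past
# decisions (the training labels) leaned on it rather than on merit.
zip_score = rng.normal(loc=group * 1.5, scale=1.0)
label = (zip_score + rng.normal(0, 0.5, n) > 1.0).astype(int)

X = zip_score.reshape(-1, 1)             # the model sees only the proxy
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    mask = group == g
    print(f"group {g}: qualified {qualified[mask].mean():.0%}, "
          f"predicted-positive {pred[mask].mean():.0%}")
```

Both groups are equally qualified by construction, yet the model approves one group far more often. No developer in this sketch intended bias; the data did the work for them, which is exactly the report's warning.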

AI will need ethics. Here's a notable recommendation: schools and universities should include ethics and related topics in security, privacy, and safety as an integral part of curricula on AI, machine learning, computer science, and data science.

The cybersecurity conundrum with AI. The Feds clearly see AI as a key technology for cybersecurity, since it can automate a lot of security tasks.

Currently, designing and operating secure systems requires a large investment of time and attention from experts. Automating this expert work, partially or entirely, may enable strong security across a much broader range of systems and applications at dramatically lower cost, and may increase the agility of cyber defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of ever-evolving cyber threats. There are many opportunities for AI and specifically machine learning systems to help cope with the sheer complexity of cyberspace and support effective human decision making in response to cyberattacks.
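As a rough sketch of the kind of automation the report has in mind, the hypothetical example below uses scikit-learn's IsolationForest, an off-the-shelf anomaly detector, to flag hosts with unusual traffic patterns for analyst review. The features and numbers are invented for illustration.

```python
# Hypothetical sketch of the kind of automation described: unsupervised
# anomaly detection over simple per-host traffic features. The features
# and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal hosts: (requests/min, bytes out, distinct ports contacted).
normal = rng.normal(loc=[60, 5e4, 3], scale=[10, 1e4, 1], size=(2000, 3))

# A few hosts behaving oddly, e.g. exfiltration-like volume and scanning.
odd = rng.normal(loc=[300, 5e6, 40], scale=[50, 1e6, 5], size=(10, 3))

X = np.vstack([normal, odd])
detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

flags = detector.predict(X)  # -1 marks an anomaly
print(f"flagged {(flags == -1).sum()} of {len(X)} hosts for analyst review")
```

The appeal is obvious: a detector like this watches thousands of hosts around the clock and surfaces only a handful of candidates, which is where the human expert's scarce attention goes.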

But what about AI security? From the report: "AI systems also have their own cybersecurity needs. AI-driven applications should implement sound cybersecurity controls to ensure integrity of data and functionality, protect privacy and confidentiality, and maintain availability."
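One concrete example of such a control, sketched below under my own assumptions (the path and digest are placeholders, not from the report): verifying a model artifact's SHA-256 digest against a known-good value before loading it, so a tampered model never reaches production.

```python
# Hypothetical sketch of one such control: checking a model artifact's
# SHA-256 digest against a known-good value before loading it. Any path
# and digest passed in are placeholders, not real values.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_if_trusted(path: Path, expected_digest: str) -> bytes:
    """Refuse to load a model file whose digest has changed."""
    if sha256_of(path) != expected_digest:
        raise RuntimeError(f"integrity check failed for {path}")
    return path.read_bytes()
```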

Now, wouldn't you just hack the AI system in the future?
