
Artificial intelligence in the real world: What can it actually do?

What are the limits of AI? And how do you go from managing data points to injecting AI in the enterprise?
Written by George Anadiotis, Contributor


AI is mainstream these days. The attention it gets and the feelings it provokes cover the whole gamut: from hands-on technical to business, from social science to pop culture, and from pragmatism to awe and bewilderment. Data and analytics are a prerequisite and an enabler for AI, and the boundaries between the two are getting increasingly blurred.

Many people and organizations from different backgrounds and with different goals are exploring these boundaries, and we've had the chance to converse with a couple of prominent figures in analytics and AI who share their insights.

"Deep stupidity"

Professor Mark Bishop is a lot of things: an academic with numerous publications on AI, the director of TCIDA (Tungsten Centre for Intelligent Data Analytics), and a thinker with his own view on why there are impenetrable barriers between deep minds and real minds.

Bishop recently presented on this topic at GOTO Berlin. His talk, intriguingly titled "Deep stupidity - what deep neural networks can and cannot do," was featured in the Future of IT track and attracted widespread interest.

In short, Bishop argues that AI cannot become sentient, because computers don't understand semantics, lack mathematical insight, and cannot experience phenomenal sensation -- based on his own "Dancing with Pixies" reductio.

Bishop, however, is not some far-out academic with no connection to the real world. He does, when prompted, tend to refer to epistemology and ontology at a rate that far surpasses that of the average person. But he is also among the world's leading deep learning experts, having been deeply involved in neural networks before they were cool.

"I was practically mocked when I announced this was going to be my thesis topic, and going from that to seeing it in mainstream news is quite the distance," he notes.

His expertise has earned him more than recognition and a pet topic, however. It has also gotten him involved in a number of data-centric initiatives with some of the world's leading enterprises. Bishop, about to wrap up his current engagement with Tungsten as TCIDA director, notes that going from academic research and up-in-the-sky discussions to real-world problems is quite the distance as well.

"My team and myself were hired to work with Tungsten to add more intelligence in their SaaS offering. The idea was that our expertise would help get the most out of data collected from Tungsten's invoicing solution. We would help them with transaction analysis, fraud detection, customer churn, and all sorts of advanced applications.

But we were dumbfounded to realize there was an array of real-world problems we had to address before embarking on such endeavors, like matching addresses. We never bothered with such things before -- it's mundane, somebody must have addressed the address issue already, right? Well, no. It's actually a thorny issue that was not solved, so we had to address it."
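
To get a feel for why something as mundane as address matching is thorny, consider a minimal sketch in Python using only the standard library. The normalization rules and similarity threshold below are illustrative assumptions, not Tungsten's actual approach -- and the false positive at the end is precisely the kind of trap that makes the problem hard.

import difflib
import re

# A handful of toy abbreviation expansions; real address data needs far more.
ABBREVIATIONS = {"st": "street", "rd": "road", "ave": "avenue", "ltd": "limited"}

def normalize(address: str) -> str:
    """Lowercase, strip punctuation, and expand a few common abbreviations."""
    tokens = re.sub(r"[^\w\s]", " ", address.lower()).split()
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

def same_address(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two strings as the same address if they are 'similar enough' after normalization."""
    return difflib.SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

print(same_address("12 High St., London Ltd", "12 High Street London Limited"))  # True, as hoped
print(same_address("12 High Street, London", "21 High Street, London"))          # also True -- a false positive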

Injecting AI into the enterprise


Injecting AI into enterprise software is a promising way to move forward, but beware of the mundane before tackling the advanced

Steven Hillion, on the other hand, comes at this from a different angle. With a PhD in mathematics from Berkeley, he does not lack relevant academic background. But Hillion made the turn to industry a long time ago, driven by the desire to apply his knowledge to solve real-world problems. Having previously served as VP of analytics for Greenplum, Hillion co-founded Alpine Data, and now serves as its CPO.

Hillion believes that we're currently in the "first generation" of enterprise AI: tools that, while absolutely helpful, are pretty mundane when it comes to the potential of AI. A few organizations have already moved to the second generation, which consists of a mix of tools and platforms that can operationalize data science -- e.g. custom solutions like Morgan Stanley's 3D Insights Platform or off-the-shelf solutions such as Salesforce's Einstein.

In many fields, employees (or their bosses) determine the set of tasks to focus on each day. They log into an app, go through a checklist, generate a BI report, etc. In contrast, AI could use existing operational data to automatically serve up the highest priority (or most relevant, or most profitable) tasks that a specific employee needs to focus on that day, and deliver those tasks directly within the relevant application.
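
A rough, hypothetical sketch of that idea in Python: score each open task with whatever signals and models are available, then surface the top few inside the app. The task fields, weights, and scoring logic below are assumptions made for illustration, not how Alpine Data or anyone else actually does it.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    task_id: str
    account_value: float  # e.g. annual revenue tied to the account
    days_overdue: int     # how long the task has been waiting
    churn_risk: float     # 0..1 score from an upstream churn model

def default_score(task: Task) -> float:
    """Toy priority score: weight churn risk and overdue days by account value."""
    overdue_factor = min(task.days_overdue, 30) / 30
    return task.account_value * (0.7 * task.churn_risk + 0.3 * overdue_factor)

def top_tasks(tasks: List[Task], score: Callable[[Task], float] = default_score, k: int = 3) -> List[Task]:
    """Return the k highest-priority tasks to put in front of the employee today."""
    return sorted(tasks, key=score, reverse=True)[:k]

todo = [
    Task("renew-acme", 120_000, 2, 0.8),
    Task("invoice-globex", 40_000, 14, 0.1),
    Task("call-initech", 90_000, 0, 0.4),
]
for task in top_tasks(todo, k=2):
    print(task.task_id)  # the two tasks the app would surface first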

"Success will be found in making AI pervasive across apps and operations and in its ability to affect people's work behavior to achieve larger business objectives. And, it's a future which is closer than many people realize. This is exactly what we have been doing with a number of our clients, gradually injecting AI-powered features into the everyday workflow of users and making them more productive.

Of course, this isn't easy. And in fact, the difficult aspect of getting value out of AI is as much in solving the more mundane issues, like security or data provisioning or address matching, as it is in working with complex algorithms."

Know thy data -- and algorithms


Before handing over to AI overlords, it may help to actually understand how AI works

So, do androids dream of electric sheep, and does it matter for your organization? Although no definitive answers exist at this point, it is safe to say that both Bishop and Hillion seem to think this is not exactly the first thing we should be worried about. Data and algorithmic transparency on the other hand may be.

Case in point -- Google's presentation on deep learning, which preceded Bishop's at GOTO. The presentation, aptly titled "Tensorflow and deep learning, without a PhD," did deliver what it promised. It was a step-by-step, hands-on tutorial on how to use TensorFlow, Google's open source toolkit for deep learning, given by Robert Kubis, senior developer advocate for the Google Cloud Platform.
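
For a sense of what such a recipe looks like, here is a minimal sketch of a TensorFlow digit classifier written against the Keras API. This is not Kubis' actual code, just the canonical "do A, then B" shape these tutorials take; the layer sizes and training settings are illustrative assumptions.

import tensorflow as tf

# Load the MNIST handwritten-digit dataset and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small feed-forward network: flatten the 28x28 image, one hidden layer, ten output classes.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)   # train
model.evaluate(x_test, y_test)          # report accuracy on held-out digits

Following the recipe is easy enough; understanding why each step works is another matter entirely.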

Expectedly, it was a full house. Unexpectedly, that changed dramatically as the talk progressed: by the end, the room was half empty, and Kubis was sent off with lukewarm applause. Bishop's talk, by contrast, started with what seemed like a full house, and ended up proving that even more people could be packed into the room, with roaring applause and an entourage for Bishop.

There is an array of possible explanations for this. Perhaps Bishop's delivery style was more appealing than Kubis' -- videos of AI-generated art and Blade Runner references make for a lighter talk than a recipe-style "do A then B" tutorial.

Perhaps up-in-the-sky discussions are more appealing than hands-on guides for yet another framework -- even if that framework happens to be Google's open source implementation of the technology that is supposed to change everything.

Or maybe the techies who attended GOTO just don't get TensorFlow -- with or without a PhD. In all likelihood, very few people in Kubis' audience could really connect with the recipe-like instructions delivered and understand why they were supposed to take the steps described, or how the algorithm actually works.

And they are not the only ones. Romeo Kienzler, chief data scientist at IBM Watson IoT, admitted in a recent AI Meetup discussion: "We know deep learning works, and it works well, but we don't exactly understand why or how." The million-dollar question is -- does it matter?

After all, one could argue, not all developers need to know or care about the intrinsic details of quicksort or bubble sort to use a library's sort function -- they just need to know how to call it and trust that it works. Of course, they can always dig into commonly used sort algorithms, dissect them, replay and reconstruct them, building trust in the process.
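
The analogy in code: calling a built-in sort is an act of trust, but a textbook algorithm like bubble sort can always be opened up and traced by hand. The snippet below is just that comparison, nothing more.

from typing import List

def bubble_sort(items: List[int]) -> List[int]:
    """A transparent (if slow) sort that anyone can trace step by step."""
    result = list(items)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

data = [5, 2, 9, 1]
print(sorted(data))       # the black box we call and trust: [1, 2, 5, 9]
print(bubble_sort(data))  # the version we can dissect and verify: [1, 2, 5, 9]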

Deep learning and machine learning, on the other hand, are a somewhat different beast. Their complexity and their departure from conventional procedural algorithmic wisdom make them hard to approach. Coupled with vast amounts of data, this makes for opaque systems, and adding poor data quality to the mix only aggravates the issue.

It's still early days for mainstream AI, but dealing with opaqueness may prove key to its adoption.
