AI and the enterprise: The view from Noodle.ai

Stephen Pratt, CEO of Noodle.ai, explains how his San Francisco-based startup uses 'AI as a service' to help companies make complex business decisions.
Written by Charles McLellan, Senior Editor


Noodle.ai was founded in March 2016 by Stephen Pratt, who has an impressive track record in the tech industry: before launching the San Francisco-based AI startup he oversaw all worldwide Watson implementations for IBM Global Business Services; prior to that he was the founder and CEO of Infosys Consulting, a senior partner at Deloitte Consulting, and a technology and strategy consultant at Booz, Allen & Hamilton.

Noodle.ai is firmly focused on the enterprise, and its mission is to bring business executives, process experts and AI technologies together to address complex business challenges and deliver enhanced outcomes. ZDNet talked to Stephen Pratt to learn more.


Stephen Pratt, CEO of Noodle.ai.

Image: Noodle.ai

ZDNet: How did your career path lead you to forming Noodle.ai?

Stephen Pratt: I have a background in satellite communication and digital signal processing, and started my career doing spooky stuff for the government. Later I was a partner at Deloitte and did the first ever project using teams in India and the US combined, and it worked so well I thought it was the future of technology implementations.

But Deloitte said they didn't want to change their strategy, so we approached the Infosys board with the idea of starting Infosys Consulting: we would do the upfront business part with technology implementation managed from India. They said yes -- and at the time that was a revolutionary thing, because everybody was flying in their entire team every week and flying them home by the weekend. So we started that in 2004 and spent a decade growing it from two of us to, when I left in 2014, 32,000 people and $2.3 billion in my consulting and systems integration group.

Along the way we'd done a lot of work in analytics, and I'd actually done a project back in the 90s using neural networks -- trying to find which ships coming off the west coast were drug smugglers. We wrote a neural network, and you'd hit 'return' and then wait a couple of weeks while it burned through lots of extremely expensive computer processing power. Then it would tell you the answer -- typically about a week after those ships had left!

Presumably that was during the so-called 'AI winter' -- before computing resources were really up to the task...

Actually I'd say it was the 'AI Fall' -- or autumn -- right before the winter. This was when people were very excited about AI, but then the reality hit that the computing power wasn't there.

When I left Infosys I connected with a private equity company, TPG, and said: 'I want to run a company that combines deep expertise in business operations with learning algorithms'. They really liked the idea, so we set out to buy an analytics company and looked at about thirty, and the idea was I'd go run it. We started with $100 million companies, then $50m companies, then $10m companies, then $5m companies... Eventually it didn't look like it was going to happen, and I got a call from a friend at IBM who said 'Why don't you come and run all the Watson implementations for IBM Global Services?' So I went and did that, but it wasn't really my thing. I wanted to design an operation that was custom-built for this new age of technology, and it was much easier to do that as an independent entity than fitting in with something else.

So TPG called me back and said 'We still have all this money; we typically don't do startups, but we love the investment thesis, so why don't you just go start it?' That was the beginning of Noodle.ai: after looking and looking, and not finding what we wanted, we decided to create it. So we got a great team, including my co-founder Raj Joshi and Matt Denesuk, who at the time was the chief data science officer at GE.

See also: How artificial intelligence is taking call centers to the next level

What was the business model for Noodle.ai?

The basic premise is 'AI as a service' -- allowing customers to 'pay by the drink', as it were, to deploy AI technologies in their companies. We started out by looking at the landscape of where AI can apply within enterprises, and we're absolutely convinced that, over time, learning algorithms are going to replace rules-based software to help companies make complex business decisions. Looking a decade into the future, I think it'll be very hard to find rules-based software that's driving business decision-making.

But companies are trying to figure out where to start, and we also saw that many are starting in what we thought were the wrong places. So we spent a lot of time thinking 'what is the highest-value place where you can apply learning algorithms to create economic value for companies and help them serve their customers better?', and we settled on the space of core operations planning.

If you look at companies with complex supply chains, the way they do planning every month -- the 'sales and operations planning' process -- is fundamentally broken. It's basically a group of people coming together each month with bad information, having arguments and then settling on a suboptimal plan, because it's what they could agree to. There's no sense of what risks are inherent in the numbers, and the forecasts are typically based on backward-looking, internal-only data -- they don't take into account things like the economy, the weather, what customers are saying, and all of those things.

That's where we're focused right now, and I think Noodle.ai is the world's most advanced end-to-end platform for enterprise artificial intelligence.


Noodle.ai's in-house AI platform -- nicknamed The BEAST -- is powered by the 1-petaFLOP Nvidia DGX-1 supercomputer, which the company says delivers the computing capacity of 25 racks of conventional servers in a single integrated system.

Image: Nvidia

How is your AI-as-a-service platform architected: do you run your own data centres or use public cloud services, or both?

Training these algorithms is extremely computationally intensive, and doing it in the public cloud would be very slow and prohibitively expensive. Typically, public clouds run technology that's several generations old, whereas in our data centre we have about a petaFLOP of computing power: if you tried to replicate that in a public cloud, the economics would turn upside down. But once an algorithm is trained, we will typically put it in a Docker container and run it in the public cloud. That'll be dedicated to the client. Running an algorithm once it's trained is not as computationally intensive, so it'll go into AWS -- we can use other clouds, but typically it'll go into AWS.

How do you position Noodle.ai's solution compared to other companies claiming to offer AI as a service?

The 'secret sauce' for us is the combination of our AI-as-a-service platform plus the deep business understanding of things like supply chain, pricing and demand forecasting. I would say that what Microsoft, Google and others are doing is creating great tools; what we're doing is building great buildings. When someone creates a really good tool, we'll put it into the BEAST [Noodle's in-house AI platform, based on the Nvidia DGX-1 supercomputer]. For example, when Google came up with TensorFlow, we thought that was really cool, so we put that into the BEAST.

If the main pillars of AI are the algorithms, the computing muscle and the data, is it the latter -- data in a suitable format for analysis -- that's usually the main bottleneck?

Usually the most time-consuming part is ingesting the data. We have what we call a 'feature library' that will recommend which data sets customers should look at; we also have our 'data cartridges', which are typically external data -- curated data on things like the economy, weather and events. We think of data done well as four quadrants: internal, external, structured and unstructured, and typically we look at every combination -- internal/structured, internal/unstructured, external/structured, external/unstructured data -- to get the most comprehensive view.

An example would be XOjet, which is the largest private aviation company in the US, who were doing pricing basically based on spreadsheets and what they did the previous year. We built a pricing application that takes into account the location of every one of their jets and their competitors' jets, and also what events are happening at each location to try to predict demand. This would generate a price recommendation, and as soon as they deployed our application the company's revenue increased by five percent.

Another problem XOjet had was that too many jets would bunch in one area, so we built a fleet balance application that anticipates not only the demand for the current day, but also the demand implied by the new configuration of where the jets will be, and keeps all of that in balance through economic incentives. So if you want to fly to somewhere where there are too many jets, it's going to be very expensive -- and if you want to go somewhere where they really need jets, it'll be less expensive. As soon as we put that in, the number of flights with empty jets -- they're called 'deadhead' flights -- went down 11 percent.
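The incentive mechanism described above can be sketched in a few lines. This is purely illustrative -- the function name, the linear pricing rule and all the numbers are assumptions, not anything Noodle.ai has disclosed -- but it shows the idea: a surcharge where jets are oversupplied, a discount where they are needed.

```python
def incentive_multiplier(jets_at_destination: int, expected_demand: int,
                         sensitivity: float = 0.1) -> float:
    """Hypothetical price multiplier for a flight into a given location.

    Oversupplied destinations (more jets than expected demand) get a
    surcharge; undersupplied ones get a discount, nudging the fleet
    back into balance. The linear rule and constants are made up.
    """
    imbalance = jets_at_destination - expected_demand
    # Clamp so the multiplier stays within a sane band (0.5x to 1.5x).
    return max(0.5, min(1.5, 1.0 + sensitivity * imbalance))

base_fare = 20_000
# Destination already holds 8 jets but expects only 3 bookings: surcharge.
print(base_fare * incentive_multiplier(8, 3))   # 30000.0
# Destination holds 1 jet but expects 6 bookings: discount.
print(base_fare * incentive_multiplier(1, 6))   # 10000.0
```

A real system would of course derive the demand estimates from a forecasting model rather than take them as fixed inputs.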


Noodle.ai has created an index based on the level of AI and advanced analytics adoption in enterprises, and examined the relationship with stock value. The above example is for major hotel companies.

Image: Noodle.ai Labs

Talking of quantifying the effects of deploying AI, I've seen your Noodle eAI Index: does that have real traction when you publish it for different markets, such as hotels and airlines?

Yes, it stirs up all kinds of controversy! People argue about individual companies, and whether they should be higher or lower -- but it's data-driven, so that's the answer...

Which sectors are leading the way in adopting AI to guide their business processes, and which are lagging behind?

Financial services is clearly in the lead. Online retailers are doing well, but traditional retailers are not doing well. I would say that companies in the 'gig economy' -- Airbnb, Uber...those kind of companies -- are way ahead. And of course, the king of them all is Amazon: the number of learning algorithms that they have deployed throughout the company is really awesome. There's an interesting quote from Jeff Bezos where he says that, although Alexa is getting all the attention, the real transformation at Amazon is deploying learning algorithms throughout its core operations.

We're seeing a lot of companies get distracted by shiny toys when it comes to AI -- like a robot in your hotel lobby to check guests in, or a chatbot on your website. These are things that are generally very expensive to do and have very low returns. I would say that the worst place you could possibly start in AI is to develop a chatbot, but it's amazing how many people are starting there.

There was a study by The Hackett Group in 2016 that looked at the amount of excess working capital in companies, and they quantified that as over a trillion dollars. This is mostly inventory, and if you think about the fact that there are such poor planning processes, it's wasting materials, it's wasting production capacity, it's wasting distribution capacity -- all these things are the core of the economy. There's just tremendous waste in business right now because the core decisions are made by spreadsheets and arguments. We think they should be run by learning algorithms and supercomputers. It's that transformation that will create less waste in the world.

How far has this transformation progressed among leading companies like the Fortune 1000, and what sort of opportunity is there for Noodle.ai?

If you think about replacing every rules-based piece of software that's driving business decisions, that's an unimaginably big market. Most of the effort in IT so far has been putting in systems of record, like ERP and CRM, but these are not intelligent decision advisors. I would say we're five percent, if that, towards maturity. There are very few companies that I would say are advanced in deploying learning algorithms throughout their operations. I would still say the majority of companies have done nothing.

Will C-suite executives need to become more 'data-aware' to do their jobs properly -- for example, if shareholders challenge a strategic decision largely generated via learning algorithms?

All C-suite execs need to understand the power of learning algorithms, to help them make better decisions, to amplify their knowledge, augment their reach, and in some cases, automate routine things, so they can spend more of their time focused on strategy and growing the company. A key part of this is becoming aware of the value of data, but becoming analytics-aware is an even higher-order necessity than simply becoming data-aware.

One of the important differences between enterprise AI and consumer AI is the need for models to be interpretable -- a focus on 'glass box' rather than 'black box'. Executives making decisions that are essential to their company's future need to know why a recommendation is made. There is a large class of algorithms available that are highly interpretable, which I would say gives you much better information on why you should make a decision than what people are currently using -- typically spreadsheets or BI systems. For example, you might see: "since weather is bad or the economy is low, that contributes x percent increase or decrease to our demand". Enterprise AI should give executives more information to defend their decisions.
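A minimal sketch of the kind of 'glass box' attribution described above: with a linear model, each driver's contribution to the forecast can be read off directly as its weight times its deviation from baseline. The baseline, weights and feature names here are all invented for illustration.

```python
# Illustrative only: a linear demand model whose per-feature contributions
# are directly inspectable. All numbers and names are hypothetical.
baseline_demand = 1000.0   # units/week when every driver sits at its baseline

# Learned weights: effect on demand per unit of deviation from baseline.
weights = {"weather_index": -40.0,   # worse weather depresses demand
           "economy_index":  25.0,   # a stronger economy lifts it
           "event_count":    15.0}   # local events add demand

def explain_forecast(deviations: dict) -> tuple[float, dict]:
    """Return the forecast and each feature's contribution in units."""
    contributions = {f: weights[f] * d for f, d in deviations.items()}
    return baseline_demand + sum(contributions.values()), contributions

forecast, parts = explain_forecast(
    {"weather_index": 2.0, "economy_index": -1.0, "event_count": 3.0})
# parts: weather -80, economy -25, events +45  →  forecast 940.0
```

More sophisticated models need extra machinery (partial dependence, additive explanations) to produce the same kind of per-driver story, which is the trade-off the 'glass box' framing points at.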

This is especially applicable to revenue guidance. An executive could give revenue guidance with a certain percent confidence -- after all, expected revenue is a probability distribution function. Executives need to make a decision as to what they're going to commit to because people typically ask for just one revenue number. Financial analysts might not be ready for that kind of forecast, but in reality, that's a more effective way to run your business.

It may be that conservative companies give a forecast that is mean minus one standard deviation, while aggressive companies might give the mean -- 50 percent likelihood you're above or below your revenue forecast. If you're an ill-advised company you'll give mean plus one standard deviation, and in most cases you will not meet your forecasts.
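The conservative/aggressive guidance framing above can be made concrete with Python's standard-library `statistics.NormalDist` (the revenue figures are invented; treating revenue as normally distributed is itself a simplifying assumption):

```python
from statistics import NormalDist

# Revenue modelled as a probability distribution ($M; values illustrative).
revenue = NormalDist(mu=100.0, sigma=10.0)

conservative = revenue.mean - revenue.stdev   # guide at mean - 1 sigma
aggressive = revenue.mean                     # guide at the mean

# Probability of actually meeting or beating each guidance number.
p_meet_conservative = 1 - revenue.cdf(conservative)
p_meet_aggressive = 1 - revenue.cdf(aggressive)
print(round(p_meet_conservative, 2))   # 0.84
print(round(p_meet_aggressive, 2))     # 0.5
# Guiding at mean + 1 sigma would be met only ~16% of the time --
# the 'ill-advised' case in the text.
```

The numbers line up with the interview's point: a mean-minus-one-sigma forecast is met about 84 percent of the time, the mean only half the time.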

Is the post of Chief Data Officer (CDO) increasingly vital?

Certainly managing data within a company is becoming a strategic necessity -- thinking of data as a core competitive differentiator and holding latent value. But I also think it needs to go beyond data. One of the dangers of creating a Chief Data Officer is a focus on the accumulation of data, which is necessary, but not sufficient. I recommend considering a Chief Analytics Officer, charged with using data to create business value.

I have seen governance conflicts where there is one executive in charge of collecting data and another in charge of implementing analytics. There can be a lot of friction between the two. I would advise companies to create a Chief Analytics Officer and then have a Director of Data report to analytics.

See also: What is a CIO? Everything you need to know about the Chief Information Officer explained

What are the most exciting, and disturbing, recent developments in AI and ML, from your personal perspective?

To take the disturbing first: as I noted earlier, a recent AI/ML development we are seeing is the high number of companies starting in the wrong place. There are a lot of people focused on things that are very difficult to do, from the data science perspective, and of dubious business value. Our favourite example of this is chatbots. We're concerned that people will spend millions of dollars on chatbots and use that to say AI/ML doesn't work. Instead, people should focus on the highest-value uses of AI/ML in the enterprise, and things that are achievable -- use cases around the business planning process and core business operations are a good place to start.

The most exciting developments? Personally, I'm very excited about the use of deep learning for time series data. Typically, deep learning is applied to unstructured data -- mostly images. It's highly effective at telling whether a picture shows a cat or a dog. But recent advances in long short-term memory (LSTM) models apply deep learning to structured numeric data, creating much more sophisticated demand forecasts, pricing recommendations, and so forth.
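To make the LSTM idea concrete, here is a minimal NumPy sketch of a single LSTM time step -- just the gate arithmetic, with random untrained weights, not a production forecaster. The gates decide what to remember and forget as a numeric series is consumed one value at a time.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step (a minimal sketch, not a tuned implementation).

    x: input vector (n_in,); h_prev, c_prev: previous hidden/cell states (n_h,).
    W: (4*n_h, n_in), U: (4*n_h, n_h), b: (4*n_h,) hold the stacked
    input / forget / candidate / output gate parameters.
    """
    n_h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sigmoid(z[0*n_h:1*n_h])     # input gate: how much new info to admit
    f = sigmoid(z[1*n_h:2*n_h])     # forget gate: how much memory to keep
    g = np.tanh(z[2*n_h:3*n_h])     # candidate cell update
    o = sigmoid(z[3*n_h:4*n_h])     # output gate
    c = f * c_prev + i * g          # long-term (cell) memory
    h = o * np.tanh(c)              # short-term (hidden) output
    return h, c

# Run a toy univariate series through the cell with random weights.
rng = np.random.default_rng(0)
n_in, n_h = 1, 4
W = rng.normal(size=(4*n_h, n_in))
U = rng.normal(size=(4*n_h, n_h))
b = np.zeros(4*n_h)
h, c = np.zeros(n_h), np.zeros(n_h)
for value in [0.5, 0.7, 0.6, 0.9]:   # e.g. normalised weekly demand
    h, c = lstm_step(np.array([value]), h, c, W, U, b)
# h now summarises the series and could feed a forecasting output layer.
```

In practice one would train such a cell (via a framework like TensorFlow or PyTorch) on historical demand or pricing series rather than hand-rolling it, but the recurrence above is the mechanism those frameworks implement.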


Previous and related coverage

Report: Artificial intelligence is creating jobs, generating economic gains

New study from Deloitte shows that early adopters of cognitive technologies are positive about their current and future role.

The great data science hope: Machine learning can cure your terrible data hygiene

Data lakes didn't quite pan out so now it's all about an abstraction layer and machine learning to save the day. Hopefully, machine learning can cleanse your data on the fly since humans have proved repeatedly they aren't meticulous enough.

HPE launches upgraded high-performance systems for AI applications

The company wants to make high-performance computing and AI more accessible to smaller companies.

Dell EMC high-performance computing bundles aimed at AI, deep learning

Dell EMC at SC17 outlined a series of bundles designed to bring HPC to industries and enterprises.

Artificial intelligence: It's about to cause a major upheaval in jobs

Advances in computer vision, speech, analytics, and mobile robotics promise to affect any jobs related to these skills.

Inside the black box: Understanding AI decision-making

Artificial intelligence algorithms are increasingly influential in peoples' lives, but their inner workings are often opaque. We examine why, and explore what's being done about it.
