If you've ever had the pleasure -- and we use that word lightly -- of pricing cloud computing services, you'll be delighted to know there's a whole new roster of offerings to complicate your buying decision, under the rubric of artificial intelligence (AI).
Also: Automation technologies, AI, and robotics are critical CIO targets
The Big Four cloud computing majors -- Amazon, Microsoft, Google, and IBM -- all offer the ability to construct and run neural networks and other forms of AI in their public cloud computing facilities, and they all have various tools and various prices for doing it. Yet another class of services is provided by the cloud SaaS champs, Oracle and Salesforce.
There are so many choices, with so many idiosyncrasies in their features and pricing, that you might need some artificial intelligence just to figure out which are the best deals.
Fortunately, ZDNet is offering real intelligence: We've studied the various offerings and compiled ways to think about the buying decision.
The good news: There's a lot of overlap in the services, and there are many ways to get started for free. You have choice, and you can start out by dipping a toe in the water.
The less-good news: Your final decision will depend on a careful assessment of what your goal is in a still very nascent field -- machine learning (ML). You may not know until you spend some time working with these vendors' technology just what exactly you want from their services.
Makers versus takers
The first thing to do is to think about yourself and your company in relation to these offerings.
Also: Making sense of Microsoft's approach to AI
Machine learning lets a company find patterns in data. That simple statement encompasses a wide variety of goals, from detecting sentiment in a text document to projecting the next action to take with a customer based on a history of interactions.
To understand that spectrum from a practical standpoint, think of yourself in one of two buckets: Makers and takers.
Makers are those who wish to build some potentially new application, perhaps from scratch, or at least with a heavy degree of customization -- from preparing data, to designing the neural network model that will be used, to how it will be served up. That can involve a lot of experimentation with data science and machine learning concepts at the very bleeding edge of the discipline, and revising one's work over many hours of computing time. A maker is one part data scientist, one part IT administrator, and one part business analyst -- or perhaps a team comprising all those abilities.
A taker, on the other hand, is someone who wants to quickly use some kind of AI capability with a minimal effort. A taker may be a marketing exec or sales rep with no knowledge of AI, or an IT admin who simply wants to deliver new capabilities to customers or employees who have to use those applications.
Thinking about the two use cases immediately begins easing the buying decision.
Those who make AI
Makers build neural networks, train them, and then unleash them on real-time signals, which could be batches of transactional data or individual transactions via a web commerce site.
Also: Mind the gap: AI and machine learning lag in adoption
That requires preparing data, designing a model to test against some data repository, training it on a large set of data, and finally deploying it as a live service.
That means purchasing storage -- for development data, training data, and for the data returned as a result of a query using the live, trained model.
The process with each of the Big Four starts by setting up a cloud account and choosing a storage option. This stage already involves choices -- not just about how much data you have, but how you're going to analyze that data in your neural network. Google, for example, offers two kinds of pipelines for machine learning data, called Dataproc and Dataflow, each with its own per-gigabyte pricing. Dataproc is optimized for the Hadoop file system and the analysis packages built to handle it, such as Spark ML. Dataflow is meant to ingest either batch or stream data via tools such as Apache Beam, and to feed Google's Machine Learning Engine, where one builds models with TensorFlow, PyTorch, or another ML programming framework.
The point is, putting all your data in the public cloud is a big buying decision in itself. Unless you've already standardized on Amazon's S3 storage or Microsoft's Azure Blob storage, you may want to first try out the options with a free account from a vendor, and monitor what kind of economics you'll achieve as you go along. All the vendors offer free accounts for just this purpose, and most of those free offerings will last up to a year, so you have some time to explore.
Plethora of choices
Once you've got the data, you have a plethora of choices for making things. The simplest and most flexible options are the various machine learning engines with which you can build multiple models in TensorFlow and other frameworks. These are Google's Cloud Machine Learning Engine, Amazon AWS's SageMaker, IBM's Watson Machine Learning, and Microsoft's Azure Machine Learning Service. All of them let you pay by the training hour when developing the model, and then pay for deployment based on the number of transactions. These offerings give you the greatest freedom to bring in different frameworks in which to program models, and to choose the machine configuration, such as memory and processor cores.
Also: Sensor'd Enterprise: IoT, ML, and big data
At this point, you may also want to consider options for accelerating the task of training or performing inference. Google, of course, makes a play for its Tensor Processing Unit, a custom chip now on its third iteration that is expressly designed to accelerate the matrix math at the heart of training models. Microsoft promotes the use of field-programmable gate arrays, or FPGAs, under the banner of Project Brainwave. Amazon, in addition to developing its own chips for running model training, has announced a chip called Inferentia, which will be available sometime later this year. All four offer graphics processing units, or GPUs, which have become the workhorse of model training, to accelerate workloads.
There are several ways to simplify your setup, and the buying process. They include prepackaged virtual machines and containers designed specifically for machine learning and data science. Google offers the Cloud Deep Learning Virtual Machine, Microsoft offers its Data Science Virtual Machine, and Amazon has the Deep Learning Amazon Machine Image. IBM takes a somewhat different tack, promoting its Watson Deep Learning Studio as a dedicated program that can be used to visually drag and drop components of a machine learning model. Microsoft has something similar with its Machine Learning Studio.
A key distinguishing factor for both Microsoft and IBM in all of this is their ability to handle on-premises machine learning. With deep hooks into decades of enterprise wares, the two vendors offer more substantial offerings for companies that want to perform machine learning on their own infrastructure. IBM's Watson Studio can be used behind the firewall to build and train models, which can then either be deployed in the cloud, or deployed to the local data center with the option of Watson Machine Learning for Private Cloud. Another option is IBM's Watson AI Accelerator, a software stack running on premises on the company's Power line of servers. IBM advises this for building out large-scale deployments of heavy deep-learning models.
Similarly, Microsoft's Azure ML Studio can be used behind the firewall to design neural networks, drawing training data from the company's SQL Server database. There is also a version of Azure Machine Learning that's a licensed server product for on-premises deployment. Analytics functions can be constructed natively in SQL Server. And even the public cloud version of Azure Machine Learning can draw data from the on-premises SQL Server. Clearly, there is a plethora of private and hybrid functions.
In both IBM and Microsoft's case, a strong argument for on-premises is that the biggest use of data is during the training period of a new neural network. If customers can do that work in their own data centers, they stand to save a bundle on buying storage in the public cloud.
Whichever vendor you go with, you'll want to scrutinize the programming frameworks and tools each one offers. All the Big Four support the most popular AI frameworks, TensorFlow and PyTorch. Amazon and Google tend to support a greater breadth, including scikit-learn, MXNet, Rapids, Spark ML, and XGBoost. Some have become dividing lines, such as ONNX, a common format for exchanging models between frameworks, which is supported by Microsoft and Amazon but not Google. IBM has its own package for data analysis forms of machine learning that's unique to it -- SPSS Modeler. You'll have to double-check whether your favorite framework is supported.
All of the services, in addition to offering special workbenches such as Watson Studio, allow you to use popular tools for prototyping neural networks such as Jupyter notebooks or Pandas. Your biggest question as you test these services is how easily you can move data and models in and out of the rest of the cloud workflow.
Taking AI on a consumption basis
Let's face it: A lot of people talk about AI when all they really want is to perform some simple data analysis without conducting fundamental data science. For those who would rather skip a lot of coding, there are a growing number of APIs that can be plugged into an app, or prepackaged solutions that deliver a ready function such as understanding natural language or running a chat bot.
Also: IBM takes on Alzheimer's disease with machine learning
More and more, vendors are moving to new ways to simplify building things. Google offers AutoML, which basically gives you the model for image processing (face recognition and object recognition), natural language processing, and language translation. This means you can skip a lot of the work of building a neural net from scratch. Microsoft in December unveiled "Automated ML," which searches through many machine learning models in order to find the best one for a task, eliminating the design challenge for users. IBM later this year will release a beta of something similar, called Neural Network Synthesis, or NeuNetS.
In a similar vein, Amazon offers a raft of AI/ML services. These include Comprehend, which identifies phrases, names of people and places, and brands in text documents, among other things; Rekognition, which identifies people and objects in images and can spot inappropriate content; and Forecast, which makes predictions when fed historical data, combining time series analysis with other data, such as product information, using machine learning.
Like Google's AutoML, Amazon's AI/ML services let you forgo specifying a neural network model: simply run a script, and the system tries a bunch of nets while you let it know when it arrives at predictions that fulfill your objective. APIs let you incorporate the results of predictions into your applications.
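Conceptually, these automated services run a search over candidate models and keep whichever one scores best against your objective. Here's a purely illustrative, minimal sketch of that selection loop -- the candidate "models" and scoring function are invented stand-ins, not any vendor's API, and real services search over full neural network architectures:

```python
# Toy illustration of automated model selection: try several candidate
# "models" (here, simple functions) and keep whichever scores best on
# held-out data.

def linear_model(x):
    return 2 * x

def quadratic_model(x):
    return x * x

def constant_model(x):
    return 7

def score(model, examples):
    """Mean squared error over (input, target) pairs; lower is better."""
    return sum((model(x) - y) ** 2 for x, y in examples) / len(examples)

def auto_select(candidates, examples):
    """Return the candidate with the lowest error, AutoML-style."""
    return min(candidates, key=lambda m: score(m, examples))

# Held-out data generated by y = x^2, so the quadratic should win.
holdout = [(1, 1), (2, 4), (3, 9)]
best = auto_select([linear_model, quadratic_model, constant_model], holdout)
print(best.__name__)  # → quadratic_model
```

The vendor systems differ in what they search (architectures, hyperparameters, feature transforms) and how, but the buyer-facing contract is the same: supply data and an objective, and let the service iterate.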
Microsoft offers Azure Cognitive Services, including vision, language and speech services, to classify images, understand spoken phrases, and create question-and-answer sessions from documents such as an FAQ.
IBM's Watson offers a raft of services within categories such as Knowledge and Data and Speech that offer functions such as text-to-speech, speech-to-text, and the Knowledge Catalog, which can find, curate, and categorize data within the metadata you feed it.
In each of these cases, you not only don't program, you don't have to provision infrastructure services from the major vendors. You simply set up your data in the cloud and pay by the number of characters, documents, or images you process, at rates that vary by vendor.
Many of these APIs are an extension of the idea of serverless computing, where small units of code run on demand and can be chained together. Hence, each vendor's serverless offering can be used as glue to tie together these AI and ML services. They include Amazon's AWS Lambda, Microsoft's Azure Functions, Google's Cloud Functions, and IBM Cloud Functions. For takers of AI, serverless functions will be increasingly important glue for stitching together lots of capabilities rather than writing everything from scratch.
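In practice, that glue code tends to be short. Below is a hedged sketch of a Lambda-style handler in Python: `detect_sentiment` and `notify_sales_team` are hypothetical placeholders standing in for a vendor's AI API call and a downstream integration, not real SDK functions:

```python
# Sketch of a serverless "glue" function: receive an event (say, a new
# customer review), call an AI service to score it, then route the result.
# detect_sentiment and notify_sales_team are made-up stand-ins for vendor
# calls such as a sentiment API or a CRM webhook.

def detect_sentiment(text):
    """Placeholder for an AI-service call; returns a label and confidence."""
    negative_words = {"broken", "refund", "terrible"}
    hits = sum(word in text.lower() for word in negative_words)
    return ("NEGATIVE", 0.9) if hits else ("POSITIVE", 0.7)

def notify_sales_team(message):
    """Placeholder for a downstream integration (email, CRM, chat)."""
    return {"delivered": True, "message": message}

def handler(event, context=None):
    """Lambda-style entry point: the event carries the review text."""
    label, confidence = detect_sentiment(event["review"])
    if label == "NEGATIVE":
        notify_sales_team(f"Unhappy customer ({confidence:.0%}): {event['review']}")
    return {"sentiment": label, "confidence": confidence}

result = handler({"review": "The product arrived broken, I want a refund."})
print(result)  # → {'sentiment': 'NEGATIVE', 'confidence': 0.9}
```

The appeal for takers is that each piece is pay-per-invocation: the function, the AI API it calls, and the storage behind it are all metered separately.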
More and more, the vendors are adding functions that make these basic machine learning tasks behave like finished applications. IBM's Watson Discovery News, for example, can analyze blogs and news reports for categories and sentiment. Google is relatively new with packaged offers, having recently rolled out Contact Center AI, a call handling app that uses virtual agent technology, and Cloud Talent Solution, a job search program.
The future is embedded AI
The next step for makers and takers alike is to incorporate AI into much larger applications. Known as embedded machine learning and AI, such programs are especially well represented by two giants of enterprise applications: Oracle and Salesforce.
Also: How to Implement AI and Machine Learning
Oracle has a solid pitch for makers who want to start from their data repository and work outward from there. Its platform-as-a-service (PaaS) tools, such as the Autonomous Data Warehouse and the Data Science Cloud, are data stores that embed the ability to develop and train neural network models using TensorFlow, scikit-learn, and other popular frameworks.
For those who are takers, Oracle offers a suite of what are known as Adaptive Intelligence applications, in the domains of customer experience, enterprise resource planning, and manufacturing. These applications act as add-ons, for a separate fee, that integrate with Oracle's traditional apps in those areas. Models built by Oracle will yield insights such as a next best action for a sales team, or how to provide optimal discounts to suppliers. Oracle enhances the offering with what it calls 'Firmagraphics' -- data on companies and industries that the company has amassed through a number of acquisitions.
Salesforce stakes out a position firmly in the taker camp, with its Einstein family of machine learning functions meant to enhance its selling, marketing, and customer service apps, similar to Oracle's. Within an application for sales, for example, a rep will see lead scoring of prospects, based on an assemblage of neural network models that the company runs under the hood as a tournament of competing machine learning models.
The takers -- the Salesforce admins in a company responsible for providing the applications to enterprise users -- can deploy the capabilities without engaging in the design of models. Instead, they turn on capabilities with the help of prompts from the programs that recommend features suitable to the organization, which can be customized to the firm's needs.
Oh, the prices you'll calculate!
Have your calculators ready -- or, better yet, reach for an online bill calculator, because machine learning in the cloud involves a variety of pricing models that add up to a somewhat complex equation.
Also: The next step for machine learning and AI TechRepublic
The Big Four pricing plans for doing the most sophisticated AI development and training are generally broken down into separate training and inference pricing. Hours of training are then multiplied by various forms of units of capacity, to reflect the compute power you're using depending on the kind of compute instance you select.
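That pricing shape -- training hours scaled by capacity units, plus per-prediction charges -- lends itself to a quick back-of-envelope script. The rates below are illustrative placeholders, not any vendor's actual prices:

```python
# Back-of-envelope estimator for the common cloud-ML pricing shape:
# training billed as hours x capacity units, inference billed per
# 1,000 predictions. All rates are assumed figures for illustration.

HOURLY_RATE_PER_UNIT = 0.50      # $ per training hour per capacity unit (assumed)
PRICE_PER_1K_PREDICTIONS = 0.10  # $ per thousand inference calls (assumed)

def training_cost(hours, capacity_units):
    """Training hours are multiplied by the compute tier you pick."""
    return hours * capacity_units * HOURLY_RATE_PER_UNIT

def inference_cost(predictions):
    """Inference is metered per thousand requests."""
    return predictions / 1000 * PRICE_PER_1K_PREDICTIONS

# Example: 40 hours of training on an 8-unit instance,
# then 2 million predictions in the first month.
total = training_cost(40, 8) + inference_cost(2_000_000)
print(f"${total:.2f}")  # → $360.00
```

Note how the inference term dominates once a model goes live at volume -- a point the vendors' own calculators make clear.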
There are exceptions. For example, IBM prices its Watson Machine Learning as a combined training and inference cost, somewhat reflecting the view that training may be done offline, behind the firewall. Microsoft doesn't charge for training, it says, although you still have to pay for the underlying virtual machine instance.
Choosing acceleration chips, such as GPUs or Google's TPU, adds another cost on top of the base price.
For some of the API choices, such as video search, image categorization, or text to speech, you'll pay in allotments of pennies or dollars per minute of video, or thousands of images, or thousands of characters of text, based on how frequently you are sending API requests to perform inference.
Still other modules are on a per-seat basis. IBM charges $99 per user, per month, for the cloud version of its Watson Studio neural network design tool, but $199 per month for a desktop version. Another fee is charged for local installations behind the firewall.
Oracle's Adaptive Intelligence apps range in price for the different bundles but are charged based on a per-user license, with the CX version, for marketing, sales and services roles, costing $1,000 per month per user, plus $5 for every 1,000 interactions per month.
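Using the CX figures just quoted ($1,000 per user per month plus $5 per 1,000 interactions), a sample bill is easy to work out; the team size and interaction volume below are made-up inputs for illustration:

```python
# Monthly bill for Oracle's Adaptive Intelligence CX tier, using the
# figures quoted in the article: $1,000 per user per month plus $5 per
# 1,000 interactions per month.

PER_USER_MONTHLY = 1000   # $ per user per month
PER_1K_INTERACTIONS = 5   # $ per 1,000 interactions per month

def cx_monthly_bill(users, interactions):
    """Per-seat license plus metered interaction charges."""
    return users * PER_USER_MONTHLY + interactions / 1000 * PER_1K_INTERACTIONS

# A hypothetical 10-person sales team driving 200,000 interactions a month:
print(cx_monthly_bill(10, 200_000))  # → 11000.0
```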
Salesforce applications are included with the Unlimited version of the company's Lightning platform, but for other cloud SKUs, there's an extra charge of $4,000 per month that increases depending on the units of millions of predictions you ask of the software.
Also: Turing Award honors pioneers of AI CNET
Remember that in several cases, customers will end up amassing a store of credits, such as in Oracle's system, which can then be allotted to services on a case-by-case basis. Consequently, spending may be a matter not merely of budget allocation but also of deciding how to spend credit already collected with a given vendor.
Using the online calculators can be helpful, but your best bet is to try the free version of each application. This way, you can get a feel for how machine learning training time adds up, in the case of building or customizing machine learning models; how much data you'll have to use in the cloud; and at what rate you're likely to draw predictions from any of these systems. Especially for the last item, the meter is running once you go live with an AI model, and will keep running for as long as you and your users keep asking the system for predictions.
Google has arguably the deepest portfolio of machine learning technology of any of the Big Four. You could do worse than use the company's own developed algorithms in its AutoML service. And the Tensor Processing Unit chips are a unique offering for those in the market for AI acceleration. Google's control of the ubiquitous TensorFlow framework for machine learning implies you're in especially good hands if that's your development platform of choice.
Amazon has been in the cloud computing business longer than anyone, so the breadth of offerings to complement SageMaker is substantial, and many may already be familiar with pricing and buying in the Amazon system. The company's marketplace of third-party machine learning programs that can be added on top of Amazon's own is superior to others. The introduction of custom ARM-based processors for cloud compute will be complemented later this year by Amazon's first home-made inference chip.
As a pioneer in speech and vision and natural language processing, Microsoft's Redmond research labs have endowed the software giant with a substantial claim to greatness in modern machine learning, which should inform the company's cloud AI offerings in those functions. Microsoft can also provide an on-premises or hybrid cloud machine learning experience with enterprise applications such as SQL Server that embed analytics and machine learning capabilities. Its development of the open standard ONNX technology for AI model portability also sets the company apart, as does its development of FPGAs as acceleration tools for machine learning inference.
IBM has the richest set of tools to take AI from a company's internal data sets all the way to publicly accessible web services that deliver analysis. The company's Watson Studio acts as a hub that can coordinate reaching into on-premises repositories such as Db2 or Oracle DB; clean up and prepare the data via multiple programs such as Data Stage or Cloud Private Data; analyze it with applications such as Knowledge Studio; and then deploy predictions to the web, all built upon a modern Kubernetes architecture. IBM's decades of interaction with transaction processing systems mean an added ability to perform machine learning on things such as fraud and get a result within the window of milliseconds necessary for every transaction.
With decades in transaction processing and the database that stores the vast majority of enterprises' data, Oracle is well positioned to make machine learning a function within the same user interface that customers use daily. The company has coupled its infrastructure-as-a-service offerings, such as bare metal computing, to its extensive developer platform in the cloud, as platform-as-a-service, to make possible autonomous programs that speed up database functions by anticipating much of the analytic work that would have to be done by hand. Programs such as the Autonomous Data Warehouse can then feed into the Adaptive Intelligence applications to deliver line-of-business predictions such as which customers are more likely to be closed in a given time frame, or which suppliers should be given special payment terms.
The Einstein suite from Salesforce offers the same simple, pure approach that the cloud company pioneered: a minimum of engagement with the messy details of provisioning and deploying software and systems. The focus is on applications that sit atop the company's existing cloud-based commercial apps, making deploying and consuming machine learning as easy as possible for admins and IT workers. No machine learning development is required for an admin to turn on functions, and predictions, such as the next best action for a sales rep, are surfaced in the context of the apps they already use. Salesforce can draw upon 20 years of customer trends as data that fuels the predictions of the embedded algorithms of Einstein.
In addition to the Big Four, a number of young companies are offering overlays to cloud computing that aim to speed machine learning model training and deployment, and that in some cases can offer lower rates on compute and storage by amortizing costs across many users.
Straight out of Brooklyn, New York, the Intel-backed startup Paperspace offers a job scheduler called Gradient that handles the details of running neural networks in the cloud. You install the company's command-line tool on your local machine and spin up a Jupyter notebook, pre-packaged with machine learning frameworks and runtimes, all in a Docker container that packages up your model, which is then submitted to Gradient to be run on a cloud instance. You pay either by the hour, with rates varying by CPU, GPU, or TPU, or a flat monthly fee of $8 for teams, with other rates for enterprise use. Data storage charges also apply.
With an illustrious crew from Microsoft and Oracle, and backers such as Y Combinator and GitHub, FloydHub aims to simplify model deployment via a simple command-line interface connecting to cloud computing instances, similar to Paperspace. The company offers monthly plans of $9 for individuals and $99 for teams, as well as the option for per-second pricing.
Run by former Citrix Systems CEO Mark Templeton, DigitalOcean claims it can get your compute instance in the cloud up and running in as little as 55 seconds, using pre-built virtual machines with choice of Linux distributions, called droplets. An API lets you start and run multiple droplets in parallel and tag each one to filter job instances. Prices start at less than a penny per hour and offer a wide array of compute configurations. A cluster of Kubernetes application containers can be had for $30 per month.
The Baidu-backed startup promises to let you test thousands of different models on multiple cloud instances from the command line. Infrastructure costs range from 27 cents per hour up to $6, depending on GPU selection, with a terabyte of model and data storage for $23, plus extra fees for pro and enterprise tiers. The company cuts the fees of normal cloud jobs by storing persistent Jupyter notebook instances and repeatedly re-starting spot GPU or CPU instances after they stop running.
Designed for ultra-fast machine learning setup, the service's web-based dashboard starts you off with a blank project template or a GitHub template that lets you clone a GitHub repository. Click a button and you're up and running in a Jupyter notebook online. The service features only one instance type at the moment: an Nvidia K80 GPU with 15GB of memory attached to a four-core CPU and 50GB of storage. Pricing starts at $10 per month for individuals and $49 for a professional plan.