BUILD WHAT'S NEXT | A ZDNet Multiplexer Blog

How to talk to a cloud

How can speech-to-text Natural Language Understanding (NLU) systems, run by systems administrators on behalf of front-end users, help manage cloud resources more effectively?

Managing cloud computing resources is difficult; that is one of technical life's great truths. But like all other aspects of our existence on Earth, once we accept this fact, we can start to transcend it and overcome the obstacles in front of us.

Acceptance issues aside, what appears to be surfacing in terms of our onward approach to cloud in the age of serverless computing is not just a question of binary on-off decisions.

Very often it's not just a case of software tool A versus tool B.

In a hybrid multi-cloud serverless world, we often need to use both choices, plus an additional peppering of tool C that may have never initially come to mind when we first started architecting towards the cloud model itself.

Nowhere is this multiple-management-tool 'phenomenon' more clearly evidenced than in the data analytics functions we use to examine and interact with cloud instances.

If we want to 'talk' to a cloud application or workload to find out more about its status, we can immediately see that there are three distinctly different routes to starting the conversation. Longer term, there could well be more than three.

The power of language

New advances in speech-to-text processing are enabling us to build Natural Language Understanding (NLU) systems that can engage with human beings on our own terms.

Being able to deal not just with different people's accents, but also with the idiomatic peculiarities we all use, allows NLU to work out contextually, through semantic interpretation, what we probably mean by a given command.

From NLU we can build Natural Language Query (NLQ) technology, which will allow users to ask questions of the cloud systems they are running to ascertain where resource allocation is available.
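To make the idea concrete, here is a deliberately minimal sketch of what an NLQ front end might do: map a plain-English question onto a structured query that a cloud management layer could execute. The intent names, patterns and query fields are all hypothetical; a real NLQ system would sit on a full NLU pipeline rather than keyword matching.

```python
import re

# Hypothetical intent table: each intent is recognised by a simple
# keyword pattern. A production NLQ system would use semantic parsing,
# not regular expressions - this only illustrates the shape of the idea.
INTENTS = {
    "capacity": re.compile(r"\b(capacity|headroom|resource)\b", re.I),
    "cost": re.compile(r"\b(cost|price|spend)\b", re.I),
}

def parse_question(question: str) -> dict:
    """Turn a natural-language question into a structured query dict."""
    for intent, pattern in INTENTS.items():
        if pattern.search(question):
            return {"intent": intent, "scope": "all-regions"}
    return {"intent": "unknown", "scope": None}

query = parse_question("Where do we still have spare capacity?")
```

The structured `query` dict is what would then be handed to the cloud platform's own reporting APIs; the NLQ layer's job is only the translation step.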

Hybrid multi-cloud deployments offer a matrix of possible configurations and we humans (in this case, we're primarily talking about cloud systems administrators, architects and software developers) need to be able to work out what should go where very quickly, often in very short decision windows.

Being able to 'talk' to a cloud via NLQ and take its heartbeat to ascertain where to put or shift each workload is a huge advantage.

This will allow us to find the cheapest, most efficient and most suitable compute power for the application or database job in hand, within acceptable limits for latency and data compliance.
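The placement decision described above can be sketched as a constrained search: filter out regions that break the latency or compliance limits, then take the cheapest of what remains. The region names, prices and latencies below are invented for illustration.

```python
# Hypothetical candidate regions; figures are illustrative only.
REGIONS = [
    {"name": "eu-west",  "cost_per_hour": 0.12, "latency_ms": 40,  "compliant": True},
    {"name": "us-east",  "cost_per_hour": 0.09, "latency_ms": 110, "compliant": True},
    {"name": "ap-south", "cost_per_hour": 0.07, "latency_ms": 180, "compliant": False},
]

def best_placement(regions, max_latency_ms, require_compliance=True):
    """Return the cheapest region within the latency/compliance limits,
    or None if no region qualifies."""
    candidates = [
        r for r in regions
        if r["latency_ms"] <= max_latency_ms
        and (r["compliant"] or not require_compliance)
    ]
    return min(candidates, key=lambda r: r["cost_per_hour"], default=None)

choice = best_placement(REGIONS, max_latency_ms=120)
```

With a 120 ms latency budget and compliance required, the sketch settles on the cheapest compliant region that fits, which is exactly the trade-off the paragraph above describes.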

AI is the second way

But, thankfully, human control of and interaction with cloud computing system management functions only go so far. We can also talk to clouds using automation in the form of Artificial Intelligence (AI) that is capable of learning what works best from historical transactional data and log file analytics.

Talking to a cloud through AI means we humans effectively stay silent, but the conversation itself is still very much there.

In this case, an AI engine is directed to engage with a customer's cloud deployment in order to learn the cyclicality of the firm's use case scenarios, while it also looks to identify spikes, peaks and troughs.

If we build this type of AI control brain correctly (remember, it's still just software code), we can start to finesse its neural power outwards to incorporate events, seasons, stock prices etc. outside of the customer's smaller universe of operational data.
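The spike-spotting part of that learning loop can be illustrated with a very simple statistical sketch: flag any observation that sits well above the recent trend in historical usage data. This is nothing like a full AI control brain, just the smallest version of the "learn from historical transactional data" idea; the data and thresholds are made up.

```python
from statistics import mean, stdev

def find_spikes(series, window=4, threshold=2.0):
    """Flag indices whose value is more than `threshold` standard
    deviations above the mean of the preceding `window` observations."""
    spikes = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and series[i] > mu + threshold * sigma:
            spikes.append(i)
    return spikes

# Illustrative hourly request counts with one obvious spike.
hourly_requests = [100, 102, 98, 101, 99, 350, 100, 97]
spikes = find_spikes(hourly_requests)
```

A real engine would of course model seasonality and external signals rather than a rolling window, but the principle of comparing live behaviour against learned history is the same.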

Modelling third base

Thirdly, we can be slightly less esoteric and also 'talk cloud' via the data model that we establish to govern our cloud deployment from the outset.

A data model is an essentially abstract organisation of the elements and objects that make up an application or database structure, and of the relationships they should have when they are allowed to run and execute.

If we're talking about having 'cloud conversations' as we are here, then the data model could perhaps be likened to the lexicon of language that is available to us. We can't start asking questions outside that language set until we extend the model itself.
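The 'lexicon' analogy can be shown in a few lines: define a small model of a cloud workload, and only allow questions about attributes the model actually contains. Everything here, the `Workload` type, its fields and the `ask` helper, is a hypothetical sketch, not any particular platform's API.

```python
from dataclasses import dataclass, fields

@dataclass
class Workload:
    """A toy data model: the full 'lexicon' we can converse in."""
    name: str
    region: str
    cpu_cores: int

def ask(workload: Workload, attribute: str):
    """Answer a question only if the attribute exists in the model;
    anything outside the lexicon is rejected until the model is extended."""
    known = {f.name for f in fields(workload)}
    if attribute not in known:
        raise KeyError(f"'{attribute}' is outside the model's lexicon")
    return getattr(workload, attribute)

billing_db = Workload(name="billing-db", region="eu-west", cpu_cores=8)
```

Asking `ask(billing_db, "region")` succeeds, while asking about, say, a carbon footprint fails until a field for it is added, which is precisely the point: the model bounds the questions we can put to the cloud.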

Crossed lines & chatter?

As we said at the outset, we might think about using some or indeed all of these methods to talk to cloud instances and work out how best to manage our use of available resources.

Theoretically, if the architecture and engineering are correct, then there should be no danger of crossed lines and chatter if we talk to clouds via more than one channel.

That being said, the option to combine, corral and coalesce all cloud talk channels into a single interface will help avoid any 'too many cooks' scenarios… and this is certainly a current trend in cloud management circles.