Increasingly sophisticated software and computer algorithms are being developed to power everything from Apple's Siri voice-enabled assistant to IBM's Watson supercomputer, allowing machines to better make sense of human interactions by teasing out relevant, contextual patterns.
However, analysts and IT vendors believe that as the software gets smarter and more automated, it will not spell the end of human participation. Rather, the way people "teach" these machines will change; while the level of human involvement may decrease, it is no less important to the machine learning process.
Michael Barnes, research director at Forrester Research, described this push as smart computing: a new generation of technologies combining hardware, software and services to address business problems that require real-time information, data analysis and action.
Citing examples such as IBM's Watson project, Tony Baer, principal analyst for the IT practice at Ovum, said there are finite domains of information that can be fed into a machine, where algorithms allow some degree of "context-driven synthesis of facts".
These examples, however, do not represent a machine learning independently, he added.
Similarly, Foong Sew Bun, chief technologist and distinguished engineer at IBM Asean, said computer algorithms are evolving to become more complex and, in some cases, are able to process information in a way similar to how people think. Watson, for example, shows how algorithms can be fine-tuned to quickly analyze, understand and respond to big data challenges in a "near-human fashion", he said.
That said, Foong noted these technologies have not reached the stage where they can understand the full breadth of the human communication process, as this involves intangible factors such as feelings and instincts.
People factor remains relevant
With that in mind, Baer said machine learning will always need human input and direction at some point, as computing systems will not make free associations or value judgments on their own.
"We will never take people out of the equation," he stressed.
Frank Seide, senior researcher and research manager for the speech group at Microsoft Research Asia, said the change will be in the nature of how humans educate machines. For instance, in the early days of artificial intelligence (AI) research, people taught rules to computers by writing the program codes, he noted.
Today, in many cases, humans teach computers with examples as seen in the case of machine translation and speech recognition, Seide said.
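The shift Seide describes can be sketched in a few lines of code. The snippet below is a hypothetical illustration, not any vendor's actual system: the first function shows a human teaching a rule by writing program code directly, while the second learns a comparable rule from labeled examples by counting word-label co-occurrences.

```python
# Era 1: humans teach the rule by writing program code directly.
def rule_based_sentiment(text):
    """The programmer hand-encodes the knowledge as an explicit rule."""
    negative_words = {"bad", "awful", "terrible"}
    words = text.lower().split()
    return "negative" if any(w in negative_words for w in words) else "positive"

# Era 2: humans supply labeled examples; the machine derives the rule.
def train_from_examples(examples):
    """Count how often each word co-occurs with each label
    (a tiny, Naive-Bayes-style word-count model)."""
    counts = {}
    for text, label in examples:
        for word in text.lower().split():
            counts.setdefault(word, {"positive": 0, "negative": 0})
            counts[word][label] += 1
    return counts

def learned_sentiment(model, text):
    """Classify by summing the per-label counts of the words seen in training."""
    score = {"positive": 0, "negative": 0}
    for word in text.lower().split():
        for label, n in model.get(word, {}).items():
            score[label] += n
    return max(score, key=score.get)

# The human's role has changed: instead of writing the rule,
# we curate the examples the machine learns from.
examples = [
    ("great service", "positive"),
    ("awful delay", "negative"),
    ("great food great view", "positive"),
    ("awful awful noise", "negative"),
]
model = train_from_examples(examples)
```

In both eras a person supplies the knowledge; what changes is whether it arrives as code or as curated data, which is exactly the distinction Seide draws.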
"We are now more and more moving toward teaching computers how to discover these examples themselves. The unsupervised learning of knowledge graphs is an example, but we still provide input in the form of questions or topic areas we want to have answers for, at increasing levels of abstraction," he explained.
"At some point, computers will be able to discover questions and pursue them--to be curious--and that's probably the point at which we'd have achieved true artificial intelligence. But I certainly hope there will never be a point when computers stop listening to our questions!"