AI use extends beyond scientific boundaries

Artificial intelligence is today used across various verticals to increase business efficiency, lower costs and reduce human risks, market players observe.
Written by Ellyne Phneah, Contributor

The use of artificial intelligence (AI) today goes beyond scientific industries, as it offers the benefits of reduced operational costs and human risks, as well as better management of complex data, note industry researchers. However, they also warn that the technology is still relatively new and underdeveloped, and companies also need to consider business ethics and user privacy.

Artificial intelligence is currently widely used across several industry segments, Rafael Banchs, a scientist with the human language technology department of the Institute for Infocomm Research (I2R), said in an e-mail. He added that there is already some degree of AI in almost every automated system involving sensors and actuators, or moving mechanisms.

According to Banchs, the primary reason for using the technology is to reduce operational costs by replacing the human workforce with AI agents. It is also deployed to reduce risks to human personnel as dangerous tasks can be reassigned to artificial intelligence.

In the oil industry, for example, he explained that the technology is used at different stages of the value chain, such as during exploratory analysis, where meaningful information is extracted from sub-surface seismic and electromagnetic images and relayed to geologists to help them understand geological formations.

Henrik Christensen, KUKA Chair of Robotics at the Georgia Institute of Technology, also said in an e-mail interview that AI is based on "managing diverse or messy" data and complexity. KUKA is a German manufacturer of industrial robots.

"For a task such as packaging 200 objects onto a pallet, we cannot compute an exact solution as it would require weeks and months [to fulfil]. Only through the use of AI can we deliver a solution within a specific time limit," Christensen said.

He noted that AI is already used by IT vendors such as Google, Yahoo and Microsoft in their search engines, as well as consumer companies such as Walmart and Macy's in material distribution to plan material flow and configuration management for transport.

AI is also used for generating scenarios for education and training in the military, business and other domains, said Mark Riedl, assistant professor at the Georgia Institute of Technology's School of Interactive Computing.

He explained that in military training, trainees outnumber the external support staff, so there is little provision for individualization, which limits the training experience. Similarly, institutions that provide training for autistic children and young adults have limited teaching resources for individualized care, which can help their students with quality-of-life issues.

Riedl said: "Artificial intelligence solves these problems because the problem domains explicitly require scalability that cannot be easily addressed through human efforts."

AI still new, faces security issues
Banchs noted that although artificial intelligence has evolved and matured over the last 30 years, and has been able to provide real solutions to specific problems in many areas, it is still a field in development.

Elaborating, the I2R scientist said most solutions provided by AI algorithms and methods are either partial or prone to errors, so critical systems and procedures must still involve human intelligence "at some point in the loop".

"Human's role in these cases are to monitor the operation of the system, as well as get involved in decision-making processes when the system confidence is below certain predefined threshold," he explained.

Christensen warned that security is also an issue, adding that there is a need to consider business ethics and user privacy when AI is used in these verticals.

For instance, he said, consumers' purchasing habits will be tracked and this information shared with others.

To ensure that data involving people is not compromised, these verticals must consider privacy and ethics in how data is acquired and used, Christensen advised.

With regard to using AI for training, Riedl noted that incorrect training scenarios may also waste a learner's time and result in "negative learning", where one learns the wrong things. This problem cannot be easily resolved because creating scenarios is a creative task, and it may not be mathematically possible to set boundaries that guarantee an AI system cannot generate unacceptable solutions, he said.

However, Riedl maintained that verifiability is easier for military training because of the rigorous tactics, techniques and procedures the military adopts. For social scenarios, verification relies on humans to provide examples or to check solutions directly, he added.
