Are you ready for AI?

Artificial intelligence has gone beyond a gimmick to become a business tool you will almost certainly deploy in the future. But, as Simon Sharwood discovers, you may already be using AI without even knowing it.



Imagine a team member who remembers every scrap of information he or she ever encounters, learns from it, shows up for work every day, and performs with unerring accuracy, reliability, and integrity.
Such a worker may sound like the intended outcome of Australia's impending industrial relations laws, but you are more likely to find one being developed by artificial intelligence (AI) researchers and software developers around Australia and the world.

These researchers are not, however, attempting to replicate human intelligence -- a task that would require work by dozens of scientific disciplines and is currently beyond them all. The main obstacle to replicating human intelligence -- i.e., AI doing all the things a real person does -- is the human brain's complexity: there are some 10 to the power of 15 synaptic connections inside our heads, a number that is all but impossible to replicate.

Another issue is that while we understand the anatomy of the brain in broad brush strokes, the precise electrochemical mechanisms that drive thoughts and deeds are poorly understood. Finding ways to program them into a machine is an almost literally mind-boggling task.

Rather, because building a brain is so hard, most AI researchers instead work to replicate the way humans think in software, a goal worth pursuing as people are pretty good at solving problems.

Take this example: ask a human to recognise a dog, and most will succeed even if their previous experience of canines includes only a single chihuahua and a single great dane. Give a computer the same task after offering it only a data set that describes those two breeds as dogs, and it would struggle to classify a labrador as also belonging to the canine family -- and would probably misclassify a sheep as a dog as well.

The human ability to make classifications and judgements that let us recognise dogs is not supported by most common data processing tools. A database knows where data is stored, not what it is or how to classify it. Yet for the contents of a database to accrue value, it is necessary to classify what it contains so different data can be treated differently. However, using actual humans to classify data is immensely expensive, making AI a worthy aim.

The earliest attempts at baking AI techniques and custom reasoning into code were "expert systems". Popular in the 1980s, these aimed to distil experts' knowledge of reasonably limited problem domains into a knowledge base, then apply Boolean rules and other logical processes to it to extract insights that replicated human thought. Users were typically led through a questionnaire to consult an expert system.

The technique quickly gathered a reputation for being problematic: as the number of rules in an expert system grew, the potential for conflicts between them escalated, and many expert systems ground to a halt, requiring lengthy and costly debugging. Moore's Law has since been kind to expert systems, offering the sheer processing power to make the technique more viable, but other, more sophisticated techniques have come to the fore.

AI at work today

Sydney company iOmniscient, for example, uses AI techniques called heuristic algorithms and neural networks in its "IQ" video analysis software -- the likes of which is used for security monitoring. The heuristic algorithms approximate the human ability to assess the best path to solving a problem, while the neural network is a web of simple, interconnected processing units, loosely modelled on the brain's neurons, that learns to recognise patterns by being trained on examples. The two techniques break a scene down into smaller sub-problems, apply the most appropriate method to each, then combine the results to deliver an answer.
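iOmniscient's software is proprietary and far more sophisticated, but the core idea of a neural network can be sketched with its simplest possible form: a single artificial neuron (a perceptron) that learns a classification from examples. The "features" here are invented for illustration.

```python
# A minimal sketch of the neural-network idea: one artificial neuron
# that learns to separate two classes from labelled examples.

def train_perceptron(samples, epochs=20, rate=0.1):
    """Adjust weights until the neuron separates the two classes."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # The neuron "fires" if the weighted sum crosses the threshold.
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            # Nudge each weight toward the correct answer.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Toy data: label an object "suspicious" (1) or not (0) from two
# made-up features, e.g. (speed, size change).
samples = [((0.0, 0.1), 0), ((0.1, 0.0), 0), ((0.9, 0.8), 1), ((1.0, 0.9), 1)]
weights, bias = train_perceptron(samples)

def classify(x):
    return 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0

print(classify((0.95, 0.85)))  # a fast, growing object -> 1
```

Real networks chain thousands of such units into layers, which is what lets them learn subtler patterns than any single threshold can capture.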

You have to ask what a human would see -- what reasoning would humans go through to recognise this object?

Rustom Kanga, iOmniscient

In IQ's case, the AI techniques do an almost superhuman job of detecting suspicious activities captured by video cameras. In a demonstration shown to Technology & Business, the software picked out a black briefcase set down and abandoned on the black marble floor in one of Australia's most security-sensitive buildings. The briefcase is invisible to the eye on a black and white closed-circuit TV feed, yet the software identified the case and placed a red rectangle around it within seconds. In a production environment the software would then have ensured the image was appropriately displayed in a security control room so investigations of the suspicious object could follow -- a good example of some simple AI techniques improving human performance.

The software is used in airports and other security-conscious environments, where its ability to, for example, tell the difference between an empty, stationary baggage trolley and an unusually rapidly moving trolley laden with bulging suitcases makes it an invaluable security aid.

But achieving this kind of outcome is not easy. "First you need to use the statistical approach to analyse an object," says iOmniscient CEO Rustom Kanga. "You have to ask what a human would see -- what reasoning would a human go through to recognise this object? Then you have to analyse the object's characteristics to see if it is a threat."

iOmniscient's work is far from unique, according to Peter Cheeseman, program leader of National ICT Australia's (NICTA) symbolic machine learning and knowledge acquisition program.
"It is all about modelling," he says, and much AI is therefore applied to areas such as business, whose inputs and outputs of cash, production costs and other variables are already extensively studied and therefore easily modelled.

"There is a lot that is generic about every business," Cheeseman says. "The stuff you learn in business school like cash flow is easily modelled."

iOmniscient's success is therefore at least in part due to its willingness to create models that have sufficient detail and uniqueness to make its video analysis application possible. Indeed, according to Kanga, the company consults widely with all manner of experts in human cognition, then labours to encode real human experience to ensure it can make accurate analyses.

"How do you recognise a pickpocket?" Kanga asks. "You need to find people with amazing qualifications and experiences to help quantify what you are seeing."

Anyone up on anti-spam techniques will know that those same model-making techniques are used to power security software such as spam filters.

"The first thing we do is heuristic analysis," says Dr Richard Cullen, SurfControl's Technical Manager for e-mail products and technologies. "If an e-mail has a subject line, space, then random text, if you and I look at it we can see it is spam right away."

"If you try to classify it as spam by matching it against a known database of random strings you would need an infinite database. So instead we look for the combination of the three," he says, adding that the software is assisted by more than 3000 rules that model different qualities indicating a message is likely to be spam. Those rules are weighted to mimic the human habit of ascribing greater significance to some data based on prior experience, so that the characteristics of previously identified spam e-mails help the software make its decision.

Those rules are weighted using another AI technique, genetic algorithms, which apply Darwinian processes to weed out poor results and reinforce good ones to further enhance the model of what is spam and what is not.
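SurfControl's rule set and fitness measures are not public, but the genetic-algorithm idea Cullen describes can be sketched on a toy scale. Everything below -- the three "rules", the six training messages and the thresholds -- is invented for illustration.

```python
# A hedged sketch of tuning spam-rule weights with a genetic algorithm:
# candidate weight vectors compete, the fittest survive, and crossover
# plus mutation breed the next generation.
import random

random.seed(1)

# Each message is a tuple of rule firings (1 = rule matched) and a label.
TRAINING = [
    ((1, 1, 0), True), ((1, 0, 1), True), ((0, 1, 1), True),
    ((0, 0, 1), False), ((1, 0, 0), False), ((0, 0, 0), False),
]
THRESHOLD = 1.0

def fitness(weights):
    """Count training messages this weight vector classifies correctly."""
    score = 0
    for firings, is_spam in TRAINING:
        total = sum(w * f for w, f in zip(weights, firings))
        if (total > THRESHOLD) == is_spam:
            score += 1
    return score

def evolve(generations=40, pop_size=20):
    pop = [[random.uniform(0, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection: keep the fittest
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]             # crossover
            if random.random() < 0.2:             # occasional mutation
                child[random.randrange(3)] += random.uniform(-0.3, 0.3)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # correct classifications out of 6
```

The same Darwinian loop scales to thousands of rules; only the fitness function, which scores weight vectors against known spam, needs real data behind it.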

"By combining the algorithms in a different way and capturing real human knowledge about the target we can create a system that copes with the human input," Cullen says.

Future of AI

SurfControl's software and the AI that powers it are very sophisticated, yet also very limited. The company employs more than 50 people, for example, to screen URLs for its Web-filtering software. Without their efforts to create and update its models, the AI would be working with bad data and would produce poor results.

NICTA's Cheeseman therefore thinks an important advance in AI will be systems that can change their own models.

"Business is about wielding information to assist decision making in the face of uncertainty," he says. And while businesses can be modelled using various AI techniques and different results can be derived by arbitrarily changing variables, human intervention to create the model is still required.

Cheeseman hopes this will change over time. "I am trying to build intelligent systems that reason about their own operations," he says. "Such systems would change the models to which they apply AI techniques as and when needed."

"If a customer pays their accounts regularly each month and then misses one payment, this should be flagged by the AI," he says, as in most models the customer in question would be classified as delinquent and dealt with by sending a letter demanding payment. For customers who only miss a payment while holidaying, such a letter can easily erode their loyalty, an outcome that has the potential to hurt a business.

Cheeseman therefore hopes this kind of incident will instead prompt systems to adapt their models.

"Instead of making a snap judgement, you really want to compute the probability this customer will pay their next bill. That means you can make a more sophisticated decision about whether or not to send a reminder letter," he says. Just the kind of thing a real, live person would do by the application of intelligence.
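The calculation Cheeseman describes can be sketched very simply: estimate the probability of payment from the customer's history rather than reacting to a single missed bill. The smoothing method (Laplace) and the 0.8 threshold below are illustrative choices, not anything Cheeseman or NICTA prescribes.

```python
# A hedged sketch: decide whether to send a reminder letter based on
# the estimated probability the customer pays next month.

def payment_probability(paid, missed):
    # Laplace smoothing keeps short histories from giving 0 or 1.
    return (paid + 1) / (paid + missed + 2)

def should_send_reminder(paid, missed, threshold=0.8):
    return payment_probability(paid, missed) < threshold

# A loyal customer who missed one payment in two years:
print(should_send_reminder(paid=23, missed=1))   # 24/26 ~ 0.92 -> False
# A customer who misses every second payment:
print(should_send_reminder(paid=6, missed=6))    # 7/14 = 0.5 -> True
```

The loyal customer is spared the letter; the chronic non-payer is not -- the "more sophisticated decision" Cheeseman has in mind.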

AI's new wave

Cheeseman emphasises that such self-adapting systems are at least half a decade away, but in the meantime other AI vendors are already making the technique easier to deploy and, importantly, far easier to adapt so that models can be changed.

This new wave of AI uses a new class of software called "business rules engines", and one of the world's leading sources of such programs is Canberra-based Ruleburst.

"Before databases were commonly available, data was buried in applications," says Ruleburst CEO Surend Dayal. "Then the SQL revolution happened. Data and the application were separated."

Business rules engines aim to do the same for AI, by removing the need for applications to contain hard-wired business logic and instead offering a separate, dedicated entity for business rules that can apply various logical techniques including AI. Applications therefore refer to the business rules engine to understand the logic to apply to data, which is sourced from the database.

Dayal believes the approach is powerful because it allows AI techniques to be applied to any data and application, instead of being bound by the data-processing techniques built into an off-the-shelf or custom application.

The system is also designed for ease of use. "One element is a business rules repository where you store and manage rules," he says. "The other is the business rules engine that is accessed by production systems."

To write business rules to operate in this environment you just need a PC. "To create rules you use a Windows application that runs on top of Microsoft Word," he adds.
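Ruleburst's engine is a commercial product, but the architectural idea -- decisions held in a rules repository the application consults, rather than hard-wired into application code -- can be sketched in a few lines. The rule names and order fields below are invented for illustration.

```python
# A hedged sketch of the business-rules-engine idea: the application
# hands data to a separate rules component, so logic can change
# without touching application code.

# Rules live apart from the application, as if loaded from a repository.
RULES = [
    {"name": "big_order_discount",
     "when": lambda order: order["total"] > 1000,
     "then": lambda order: order.update(discount=0.10)},
    {"name": "loyal_customer_discount",
     "when": lambda order: order["years_as_customer"] >= 5,
     "then": lambda order: order.update(discount=0.15)},
]

def apply_rules(order):
    """The 'engine': test each rule's condition and fire its action."""
    for rule in RULES:
        if rule["when"](order):
            rule["then"](order)
    return order

# The application itself knows nothing about discounts.
order = {"total": 1500, "years_as_customer": 6, "discount": 0.0}
print(apply_rules(order)["discount"])  # both rules fire; the later one wins
```

Changing a discount policy now means editing the rules list, not redeploying the application -- the same separation SQL brought to data.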

Agents on your trail

Another rising idea for improving AI systems is software agents -- small pieces of software that can encapsulate tiny pieces of logic or complete business processes.

Software agents are designed to do away with rules altogether and simply drive towards a desired outcome using whatever resources are available.

"The idea is to have agents that are very responsive to their environment and on the basis of changes they notice they execute some kind of business process," says Michael Georgeff, professor of information technology and medicine at Monash University.

"They are called agents because they respond to certain stimuli and the key is to build systems of these agents so that each does its own thing."

"Instead of having 50,000 processes or rules in one agent you have hundreds of agents, each with its own function and the ability to drive only a few processes. When you write a normal program you have to explicitly describe what you want to happen and explain every step and contingency. But an agent is 'goal directed': instead of giving it tasks to do, you give it a goal to achieve and a whole lot of different ways to achieve that goal in different circumstances," often by invoking other agents.

The result, he says, is a system that finds a way to get things done as quickly as possible, just like a human that refuses to be bogged down by rules and instead makes intuitive leaps to use all available resources to solve a problem.

"The agent chooses how to achieve the goal," Georgeff says. "If something goes wrong, the agent can think of other ways to achieve it. Humans do this all the time. If you are driving and find the road is blocked, you just call up another process and find a way to get to your destination."
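Georgeff's blocked-road example maps neatly onto a minimal goal-directed sketch: the agent holds a goal and several plans for it, and when one plan fails it simply tries another. The plan names and failure condition below are invented for illustration.

```python
# A hedged sketch of goal-directed agency: try each plan for a goal
# until one succeeds, rather than scripting every contingency.

class PlanFailed(Exception):
    pass

def take_motorway(state):
    if state.get("motorway_blocked"):
        raise PlanFailed("road blocked")
    return "arrived via motorway"

def take_back_streets(state):
    return "arrived via back streets"

def achieve(plans, state):
    """The agent's core loop: pursue the goal by any available plan."""
    for plan in plans:
        try:
            return plan(state)
        except PlanFailed:
            continue  # something went wrong: think of another way
    return "goal not achieved"

# The motorway is blocked, so the agent falls back to another plan.
print(achieve([take_motorway, take_back_streets], {"motorway_blocked": True}))
```

A production agent system adds beliefs about the world and the ability to invoke other agents as plans, but the try-another-way loop is the heart of it.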

With all of these innovations in AI emerging, it therefore seems likely that businesses will soon go on a similar journey that explores many options to reach a destination. The power of AI and its applicability to business problems can no longer be doubted. Nor can its accessibility: all of the software discussed in this story can run on commodity Intel-powered computers.

The proliferation of AI ideas and their proven ability to at least enhance today's applications means that before long many of us will be required to make our own decisions about the best way to put it to work.

It may even be a decision to relish. After all, if AI keeps evolving as fast as it has done in the last decade, how long will our liberty to make that decision last?

Case study: AI passes 2600 tests a day
At Sydney's South Eastern Area Lab Service (SEALS), artificial intelligence works hard to save human lives.

SEALS provides diagnostic pathology services to half a dozen Sydney hospitals, a task that sees it process tissue samples for more than 1500 patients each weekday, then another 700-plus on Saturdays and Sundays.

Sue Acland, SEALS' laboratory manager, Central Specimen Reception, says this volume of work means a busy lab and also a very busy data entry operation.

"Each sample has to include all the patient's demographic details, the location in the hospital or status as an outpatient, the doctor who is their service provider, the financial details for billing, the specimen type received, and the tests requested," she says.

The latter item is the most important as tests can have urgent implications for the way patients are treated. But it is not always easy to describe what tests are required. SEALS operates seven different laboratories and the work performed by one can impact later work on the same sample.

"It is hard for staff to remember exactly what is required," Acland says. "The people that do data entry are not rocket scientists although many have science degrees. There is an awful lot to keep in your head."

Errors in the other data can also result in unwelcome extra work to ensure the correct payments are made. The Health Insurance Commission also audits health services providers to ensure their work is of the highest quality.

The complexity of the data SEALS creates and enters means that in the past errors were not always detected. "With my very best people we found 80 percent of errors," Acland says. And in the resource-sensitive health industry, hiring more, or more highly skilled, staff is not an option. SEALS therefore needed a way to check the data it creates with the expertise of expert humans, so it could understand whether or not the tests requested were sensible given the patient's condition, the doctor involved, and many other factors.

AI offered the organisation an answer. Today, SEALS uses an AI application called LabWizard from Sydney company Pacific Knowledge Systems (PKS) to tackle the problem.

LabWizard uses an AI technique called Ripple Down Rules that makes it possible to create thousands of rules in natural language and filter data through them in ways that avoids the conflicts between rules that bedevil expert systems. Ripple Down Rules even work when rules are added in an ad-hoc fashion with little regard for potential conflicts -- a quality that makes AI more accessible.
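PKS's LabWizard is a commercial system, but the Ripple Down Rules idea can be sketched simply: each rule can carry exception rules added later, and an exception only applies when its parent rule fired, so new knowledge refines old rules instead of conflicting with them. The pathology conditions below are invented examples, not SEALS's actual rules.

```python
# A hedged sketch of Ripple Down Rules: corrections are attached as
# exceptions in the context of the rule that gave the wrong answer.

class RDRNode:
    def __init__(self, condition, conclusion):
        self.condition = condition
        self.conclusion = conclusion
        self.exceptions = []  # refinements added as wrong cases appear

    def evaluate(self, case):
        if not self.condition(case):
            return None
        # This rule fired; a more specific exception may override it.
        for exc in self.exceptions:
            verdict = exc.evaluate(case)
            if verdict is not None:
                return verdict
        return self.conclusion

# Base rule: high glucose suggests diabetes follow-up tests.
rule = RDRNode(lambda c: c["glucose"] > 7.0, "order diabetes follow-up")
# Later, an expert corrects a wrong case: not if the sample was non-fasting.
rule.exceptions.append(
    RDRNode(lambda c: not c["fasting"], "repeat with fasting sample"))

print(rule.evaluate({"glucose": 8.2, "fasting": True}))
print(rule.evaluate({"glucose": 8.2, "fasting": False}))
```

Because an exception is only ever consulted in its parent's context, rules added ad hoc cannot clash with unrelated parts of the knowledge base -- the property that kept 1980s-style rule conflicts at bay.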

The software runs on garden-variety Intel servers and aims to capture the knowledge a subject matter expert such as a pathologist gathers over years of work, then mimic that expert's thought processes.

For SEALS, the result is a system that examines the data it collects and, if it detects anomalies unlikely to be the result of correct or appropriate human decisions, flags them for actual human intervention.

The effect of using AI in this way, Acland says, has been a time-saving equivalent to one full-time worker. Acland is also impressed that she and her team can add rules to the system without needing Pacific Knowledge Systems' assistance.

"We program in natural language," she says. "A lot of it we can do ourselves and it's only the more complicated ones we send to PKS. We have scheduled some more time for training on programming and after that I think we will only need their help for the most difficult types of scenarios."

"What we did before was very similar but very labour intensive and prone to missing things," Acland concludes. "LabWizard doesn't miss anything. There's real potential there to save a life."

This article was first published in Technology & Business magazine.