
Two robo-advice tools shut down over ASIC concerns

The Australian Securities and Investments Commission said the advice generated by the automated tools was "inadequate".
Written by Aimee Chanthadavong, Contributor

Sydney-based financial services licensee Lime FS has agreed to voluntarily shut down two of its robo-advice tools after concerns were raised by the Australian Securities and Investments Commission (ASIC).

The robo-advice tools owned by Lime FS -- Plenty Wealth and Lime Wealth -- are authorised to provide automated personal financial advice to consumers about life insurance, budgeting, tax, investments, and superannuation. Both tools operate using algorithms and technology, without direct involvement of a human adviser.

However, ASIC said that after reviewing sample advice files from Plenty Wealth and Lime Wealth, it found the quality of advice generated by the automated online tools was "inadequate". In some instances, the advice generated by the tools conflicted with client goals or with other recommendations also generated by the tools.

"Digital advice tools offer a convenient and low-cost alternative to consumers who may otherwise not seek personal financial advice. However, the advice provided through these tools must meet the same legal obligations required of human advisers -- the advice must be appropriate to the client and comply with the best interests duty," ASIC commissioner Danielle Press warned.

ASIC said it was also concerned about Lime FS' ability to monitor the advice generated by these tools.

"ASIC expects AFS licensees and financial advisers using or recommending digital advice tools to ensure that they adequately monitor and test the advice for quality and appropriateness," Press said.


It's cases such as these that have prompted the likes of the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to call for artificial intelligence (AI) to be developed within a framework that ensures nothing is deployed on citizens without appropriate ethical consideration.

Data61, CSIRO's digital innovation arm, published a discussion paper in April on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia.

Conceding that there is no one-size-fits-all solution to the range of legal and ethical issues related to AI, CSIRO identified nine tools it says could be used to assess risk and ensure compliance and oversight.

These were impact assessments, reviews, risk assessments, best practice guidelines, industry standards, collaboration, monitoring and improvement mechanisms, recourse mechanisms, and consultation.

The Australian National University recently launched a research project to focus on designing Australian values into AI systems. 

The Humanising Machine Intelligence (HMI) project will see 17 core researchers involved in building a design framework for moral machine intelligence (MMI) that can be widely deployed.

Seth Lazar, head of the School of Philosophy at the ANU, previously told ZDNet that the need to develop moral AI comes off the back of recent concerns about existing AI systems.

"The thing that triggers my concern about AI is there [are] so many ways in which we could use AI for social good but over the last year or two it has become apparent that there are potentially a lot of unintended consequences in which AI could potentially be made for bad reasons," he said. "So there's huge demand and interest for developing AI with moral values."
