Singapore C2C marketplace turns to AI to combat fraud, improve UX

Carousell is looking to use artificial intelligence and machine learning across the organisation, tapping the technology to mitigate fraud risks and enhance user experience.

Carousell believes the focus on artificial intelligence (AI) needs to move past the hype and turn to how companies can actually adopt it to gain real business benefits.

In particular, the Singapore-based consumer-to-consumer (C2C) online marketplace was looking to machine learning and AI to combat fraud as well as improve user experience.

Carousell CTO and Co-Founder Lucas Ngoo said establishing user trust was critical for the site, on which buyers would purchase goods from individual sellers they did not personally know and who were not backed by big brand names.

The company began exploring the use of machine learning less than a year ago, tapping TensorFlow and Google's Cloud Machine Learning Engine to identify and flag potential fraud risks. For example, the software would be able to highlight an individual who sent out multiple requests to different Carousell users, asking them to leave the site's chat platform to communicate.

This information then would be sent to the company's trust and safety team for review, so the team could take the necessary action, for example, suspending the individual's account.
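The flag-then-review flow described above can be sketched as a simple heuristic. Carousell has not published its actual model, so the threshold, event shape, and function names here are assumptions for illustration; its production system reportedly uses TensorFlow models rather than a fixed rule like this:

```python
# Hypothetical sketch of flagging users who repeatedly ask others to move
# off-platform. The threshold and data shape are illustrative assumptions;
# this is not Carousell's published pipeline.
OFF_PLATFORM_REQUEST_THRESHOLD = 3

def flag_suspicious_senders(chat_events, threshold=OFF_PLATFORM_REQUEST_THRESHOLD):
    """Return sender ids who asked `threshold` or more distinct users
    to leave the chat platform; these would be queued for human review
    by a trust and safety team, which makes the final call.

    chat_events: iterable of (sender_id, recipient_id, asked_to_leave).
    """
    recipients_asked = {}
    for sender, recipient, asked_to_leave in chat_events:
        if asked_to_leave:
            recipients_asked.setdefault(sender, set()).add(recipient)
    return {s for s, recips in recipients_asked.items() if len(recips) >= threshold}

events = [
    ("u1", "a", True), ("u1", "b", True), ("u1", "c", True),  # u1 asks three different users
    ("u2", "d", True),                                         # a single request is not flagged
]
print(flag_suspicious_senders(events))  # → {'u1'}
```

Note the output is only a candidate list: consistent with the panel's point, the flag feeds a human review queue rather than triggering an automatic suspension.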

"It won't replace humans, but it augments human judgement...and helps with prediction that previously weren't possible. At the end of day, the human is still needed to make the final call based on these predictions," Ngoo said during a panel discussion at its office.

"We see machine learning [eventually] being integrated into every part of the business, including user experience and security," he said, adding that the company would continue to explore other ways to tap machine learning.

According to Ngoo, Carousell currently had a fraud rate of 0.05 percent.

Oyvind Roti, Google's Asia-Pacific Japan head of solutions architecture for cloud, noted that fraud detection processes in the past involved large numbers of manually set rules, and employees would have to painstakingly look through all of these to determine if a transaction was fraudulent.

The emergence of AI and machine learning helped automate a lot of these repeatable processes, said Roti, who also was on the panel. Machine learning not only cut down the time needed to review and identify potential risks, it also could pick up on new tactics hackers adopted over time to circumvent these rules. This would enable companies to keep up with cybercriminals.

He also stressed, though, that humans still were needed to jump in and make the final decision and take the necessary action.

Chris Auld, Microsoft's Southeast Asia principal technical evangelist manager, concurred, noting that it was difficult to imbue machines with values and morals. This underscored the need for humans to make that value judgement, Auld said.

And he would know, since Microsoft last year had to shut down its AI chatbot, Tay.ai, after just 16 hours when it began picking up inflammatory and racist opinions on Twitter.

Acknowledging the botched experiment, Auld said this experience underscored the need to imbue human values, especially as machine learning tools learnt from what they saw online, including undesirable human behaviour.

He also stressed the need for the IT vendor community to tread with caution in driving machine learning and AI, or risk having governments bear down on the industry with regulations.

ZDNet then asked how that would impact the need to balance access to more user data, to feed machine learning models, and users' demand for privacy. Google, in particular, this week admitted it continued to track Android users' location even when the setting was disabled.

While he declined to comment specifically on the matter since it was outside his focus area, Roti said Google, with regard to machine learning, always stated upfront that it had no ownership of the data used to feed these models. Its enterprise customers used and owned their data, he said.

He further pointed to another development in machine learning that did not depend on human-generated training data. AlphaGo Zero was able to master Go without prior human knowledge, starting only from the game's rules and playing completely random games against itself. In three days, it defeated its previous iteration, AlphaGo, 100 games to 0.
