TOKYO, JAPAN--Artificial intelligence (AI) has been widely touted to bring about positive changes to businesses and society as a whole, but key challenges will need to be resolved for its benefits to be truly accessible.
First, there should be easy access to machine learning expertise. Second, AI needed to be developed in a "fair and responsible" way as the technology progressed, according to Jeff Dean, Google Senior Fellow in research.
For its part, Google had been providing internal courses to arm its employees with machine learning skillsets. Such initiatives had enabled the company to grow the number of Google employees trained in machine learning from fewer than 1,000 in 2012 to more than 18,000 today, said Dean, speaking at a media event here to showcase the company's AI and machine learning technologies.
The company now planned to make its machine learning crash course available online for free to the public early next year, he said.
On the need for fair and responsible development, he noted that the data on which AI models were trained sometimes reflected the world as it was, but not necessarily the world that societies desired.
Here, as part of efforts to effect change, he said Google was involved in various initiatives such as the Geena Davis Inclusion Quotient (GD-IQ), which examined gender bias in media. Davis, the Hollywood actor, had started an institute to collect and examine data on movies, which she hoped could help bring positive change to the industry.
It was, however, tedious for researchers to trawl through years' worth of films one by one in order to log gender-specific patterns. Under the GD-IQ initiative, a tool was developed to automatically identify a screen character's gender, how long the character spoke, and how long the character was on screen. It cut down what would otherwise take humans months to measure, enabling the data to be quantified in real-time.
As a result, her team was able to analyse the top 100 highest grossing live-action films in the US and determine that men were seen and heard nearly twice as much as women. This was despite the fact that films with female leads did better at the box office, earning 16 percent more than films with male leads.
Davis believed that data would help uncover unconscious bias and convince others that it existed, so something could be done to resolve it.
Noting that Google recently was found to have tracked users' locations even after they had turned off the setting, ZDNet asked if that also meant balancing users' need for privacy against the need to collect data to feed into machine learning models. While not commenting directly on the incident, Dean said many problems today could be addressed without collecting large volumes of data, but instead with improvements in computation.
He added that Google looked to tap consumer data to improve its products, but always gave consumers the power to control their data.
DeepMind, the AI unit of Google's parent company Alphabet, just last month set up an ethics and society research division to "explore and better understand the real-world impacts of AI".
"Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work," it said. "At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes."
Amongst several product highlights and demos featured at the event here, which included Google Assistant and Google Translate, was an initiative in India to help doctors identify diabetic retinopathy. The condition could be detected through regular screening, typically done once a year, during which doctors used images of the patient's eye to rate the severity of the disease.
However, some countries lacked enough specialists trained to perform such tasks, said Lily Peng, a product manager in Google's medical imaging team. She noted that India, for instance, had a shortage of 127,000 eye doctors and, as a result, 45 percent of patients suffered vision loss before a diagnosis could be made.
Google trained its deep neural network with TensorFlow to perform the task, feeding the model 130,000 images and analyses from 54 doctors, amounting to more than 880,000 diagnoses in total. Tests showed the machine's diagnoses closely matched those of the human doctors, said Peng, who added that the algorithm was currently being validated in clinical trials and regulatory assessments in Thailand and India.
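The evaluation Peng describes can be illustrated with a small sketch: each retinal image receives a severity grade (a hypothetical five-point scale is assumed here; the article does not specify one), several doctors' grades per image are combined into a consensus by majority vote, and the model's grades are scored against that consensus. The data, scale, and function names below are illustrative only, not Google's actual method.

```python
from collections import Counter

def reference_grade(doctor_grades):
    """Combine several doctors' grades for one image by majority vote.

    Grades are assumed to be integers on a severity scale,
    e.g. 0 = no retinopathy ... 4 = proliferative (hypothetical).
    """
    counts = Counter(doctor_grades)
    grade, _ = counts.most_common(1)[0]
    return grade

def agreement(model_grades, per_image_doctor_grades):
    """Fraction of images where the model matches the doctors' consensus."""
    refs = [reference_grade(g) for g in per_image_doctor_grades]
    matches = sum(m == r for m, r in zip(model_grades, refs))
    return matches / len(refs)

# Toy data: three images, each graded by three (hypothetical) doctors.
doctor_grades = [[2, 2, 3], [0, 0, 0], [4, 3, 4]]
model_grades = [2, 0, 4]

print(agreement(model_grades, doctor_grades))  # prints 1.0 for this toy data
```

In practice, grading studies often weight disagreements by how far apart two grades are, rather than the simple exact-match rate used in this sketch.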
Based in Singapore, Eileen Yu reported for ZDNet from Google's Asia-Pacific Made With AI media event in Tokyo, Japan, on the vendor's invitation.