
The state of AI in 2020: Biology and healthcare's AI moment, ethics, predictions, and graph neural networks

Research and industry breakthroughs, ethics, and predictions. This is what AI looks like today, and what it's likely to look like tomorrow.
Written by George Anadiotis, Contributor

The State of AI Report 2020 is a comprehensive report on all things AI. Picking up from where we left off in summarizing key findings, we continue the conversation with authors Nathan Benaich and Ian Hogarth. Benaich is the founder of Air Street Capital and RAAIS, and Hogarth is an AI angel investor and a UCL IIPP visiting professor.

Key themes we covered so far were AI democratization, industrialization, and the way to artificial general intelligence. We continue with healthcare and biology's AI moment, research and application breakthroughs, AI ethics, and predictions.

Biology and healthcare's AI moment

A key point discussed with Benaich and Hogarth was the democratization of AI: What it means, whether it applies, and how to compete against behemoths who have the resources it takes to train huge machine learning models at scale.

One of the ideas examined in the report is to take pre-existing models and fine-tune them to specific domains. Benaich noted that taking a large model, or a pre-trained model in one field, and moving it to another field can work to bootstrap performance to a higher level:

"As far as biology and healthcare are becoming increasingly digital domains with lots of imaging, whether that relates to healthcare conditions or what cells look like when they're diseased, compiling data sets to describe that and then using transfer learning from ImageNet into those domains has yielded much better performance than starting from scratch."

This, Benaich went on to add, plays into one of the dominant themes in the report: Biology -- in which Benaich has a background -- and healthcare have their AI moment. There are examples of startups at the cutting edge of R&D moving to production tackling problems in biology. An application area Benaich highlighted was drug screening:

"If I have a software product, I can generate lots of potential drugs that could work against the disease protein that I'm interested in targeting. How do I know out of the thousands or hundreds of thousands of possible drugs, which one will work? And assuming I can figure out which one might work, how do I know if I can actually make it?"

Beyond computer vision, Benaich went on to add, there are several examples of AI language models being useful in protein engineering or in understanding DNA, "essentially treating a sequence of amino acids that encode proteins or DNA as just another form of language, a form of strings that language models can interpret just in the same way they can interpret characters that spell out words."
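A toy sketch of that "proteins as language" idea, assuming PyTorch and arbitrary model sizes: the 20 standard amino acids become the vocabulary, and an off-the-shelf Transformer encoder processes a sequence exactly as it would a sentence.

```python
# Sketch of treating an amino-acid sequence as text: residues are
# tokenized character by character and fed to a standard Transformer
# encoder. Model sizes and the example sequence are arbitrary.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                     # the 20 standard residues
vocab = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for padding

def encode(sequence: str) -> torch.Tensor:
    """Turn a protein string like 'MKTAYIAK' into a tensor of token ids."""
    return torch.tensor([vocab[aa] for aa in sequence], dtype=torch.long)

d_model = 64
embedding = nn.Embedding(num_embeddings=len(vocab) + 1, embedding_dim=d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=4), num_layers=2
)

tokens = encode("MKTAYIAKQR").unsqueeze(1)     # shape: (seq_len, batch=1)
hidden = encoder(embedding(tokens))            # contextual per-residue embeddings
print(hidden.shape)                            # torch.Size([10, 1, 64])
```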


The FDA published a new proposal to embrace the highly iterative and adaptive nature of AI systems in what they call a "total product lifecycle" regulatory approach built on good machine learning practices.

Transformer-based language models such as GPT-3 have also been applied to tasks such as completing images or converting code between different programming languages. Benaich and Hogarth note that the transformer's ability to generalize is remarkable, but at the same time they offer a word of warning in the case of code: no expert knowledge is required, but there are no guarantees that the model didn't simply memorize the functions either.

This discussion was triggered by a question posed by some researchers: whether progress in mature areas of machine learning is stagnant. In our view, the fact that COVID-19 has dominated 2020 is also reflected in the impact it has had on AI. And there are examples of how AI has been applied in biology and healthcare to tackle COVID-19.

Benaich used examples from biology and healthcare to establish that beyond research, the application area is far from stagnant. The report includes work in this area ranging from startups such as InVivo and Recursion to Google Health, DeepMind, and the NHS.

What's more, the US Medicaid and Medicare system has approved a medical imaging product for stroke that's based on AI. Despite pre-existing FDA approvals for deep learning-based medical imaging, whether for stroke, mammography, or broken bones, this is the only one so far that has actually received reimbursement, noted Benaich:

"Many people in the field feel that reimbursement is the critical moment. That's the economic incentive for doctors to prescribe, because they get paid back. So we think that's a major event. A lot of work to be done, of course, to scale this and to make sure that more patients are eligible for that reimbursement, but still major nonetheless."

Interestingly, the FDA has also published a new proposal to embrace the highly iterative and adaptive nature of AI systems in what they call a "total product lifecycle" regulatory approach built on good machine learning practices.

Graph neural networks: Getting three-dimensional

The report also includes a number of examples that Benaich stated "prove that the large pharma companies are actually getting value from working with AI-first drug discovery companies." This discussion naturally leads to the topic of progress in a specific area of machine learning: graph neural networks.

The connection was how graph neural networks (GNNs) are used to enhance chemical property prediction and guide antibiotic drug screening, leading to new drug candidates that work in vivo. Most deep learning methods focus on learning from two-dimensional input data; that is, data represented as matrices. GNNs are an emerging family of methods designed to process graph-structured data, such as the three-dimensional structure of molecules. This may sound cryptic, but it's a big deal, because it enables more information to be processed by the neural network.
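As a rough illustration of what "learning on a graph" means mechanically, here is a minimal message-passing layer in plain PyTorch: each atom updates its feature vector by aggregating its bonded neighbors. This is a generic sketch, not the specific architecture used in the antibiotic-screening work.

```python
# Minimal message-passing ("graph convolution") layer in plain PyTorch.
# Nodes are atoms, edges are bonds; each layer mixes a node's features
# with those of its neighbors. Generic sketch, not the reported model.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (num_nodes, num_nodes) adjacency matrix of the molecule graph.
        # Add self-loops so each node keeps its own information.
        adj_hat = adj + torch.eye(adj.size(0))
        # Normalize by node degree so high-degree atoms don't dominate.
        deg = adj_hat.sum(dim=1, keepdim=True)
        messages = adj_hat @ node_feats / deg
        return torch.relu(self.linear(messages))

# Example: a toy 4-atom molecule with 8-dimensional atom features.
atoms = torch.randn(4, 8)
bonds = torch.tensor([[0, 1, 0, 0],
                      [1, 0, 1, 1],
                      [0, 1, 0, 0],
                      [0, 1, 0, 0]], dtype=torch.float)
layer = SimpleGraphConv(8, 16)
print(layer(atoms, bonds).shape)   # torch.Size([4, 16])
```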

"I think it comes down to one topic, which is the right representation of biological data that actually expresses all the complexity and the physics and chemistry and living nuances of a biological system into a compact, easy to describe mathematical representation that a machine learning model can do something with," said Benaich.

Sometimes it's hard to conceptualize biological systems as a matrix, so it could very well be that we're just not exploiting all of the implicit information that resides in a biological system, he went on to add. This is why graph representations are an interesting next step: they feel intuitive as a tool to represent something that is connected, such as a chemical molecule.


Graph neural networks enable the representation of three-dimensional structures for deep learning. This means being able to capture, and use, more information, and lends itself well to the field of biology. Image: M. Bronstein

Benaich noted examples in molecule property prediction and chemical synthesis planning, but also in trying to identify novel small molecules. Small molecules are treated like Lego building blocks: all of these chemicals are mixed in a tube with a target molecule and, using advances in DNA sequencing, researchers can see which building blocks have assembled and bound to the target of interest.

When candidate small molecules that seem to work have been identified, GNNs can be used to try and learn what commonalities these building blocks have that make them good binders for the target of interest. Adding this machine learning layer to a standard and well-understood chemical screening approach gives a several-fold improvement on the baseline.
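Building on the generic layer sketched above (and reusing its SimpleGraphConv, atoms, and bonds), a graph-level readout plus a small classifier is roughly how such a binder-versus-non-binder model could be set up. The hyperparameters and training step are illustrative assumptions, not details from the screening work.

```python
# Continues the previous sketch: pool per-atom embeddings into a single
# graph vector, then predict whether the molecule binds the target.
import torch
import torch.nn as nn

class BinderClassifier(nn.Module):
    def __init__(self, conv: nn.Module, hidden_dim: int):
        super().__init__()
        self.conv = conv                      # e.g. SimpleGraphConv(8, hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)  # one logit: binds / does not bind

    def forward(self, node_feats, adj):
        h = self.conv(node_feats, adj)        # per-atom embeddings
        graph_vec = h.mean(dim=0)             # "readout": average over atoms
        return self.head(graph_vec)           # binding logit for the molecule

model = BinderClassifier(SimpleGraphConv(8, 16), hidden_dim=16)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a single labelled molecule (label 1.0 = binder).
optimizer.zero_grad()
logit = model(atoms, bonds)                   # atoms, bonds from the sketch above
loss = loss_fn(logit, torch.tensor([1.0]))
loss.backward()
optimizer.step()
```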

Hogarth, for his part, mentioned a recent analysis arguing that GNNs, the transformer architecture, and the attention-based methods used in language models share the same underlying logic, as you can think of sentences as fully connected word graphs. Hogarth noted the way the transformer architecture is creeping into lots of unusual use cases, and how scaling it up is increasing the impact:

"The meta point around the neural networks and these attention-based methods, in general, is that they seem to represent a sort of a general enough approach that there's going to be progress just by continuing to hammer very hard on that nail for the next two years. And one of the ways in which I'm challenging myself is to assume that we might see a lot more progress just by doing the same thing with more aggression for a bit.

And so I would assume that some of the gains that have been found in these GNNs cross-pollinate with the work that's happening with language models and transformers. And that approach continues to be a very fertile area for the kind of super general, high-level AGI-like research."
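That observation can be made concrete in a few lines: self-attention amounts to message passing on a fully connected graph, where every token receives weighted messages from every other token. The sketch below uses plain PyTorch, toy dimensions, and omits the learned query/key/value projections for brevity.

```python
# Self-attention viewed as message passing on a fully connected word graph:
# the (num_tokens x num_tokens) attention matrix acts as a dense, learned
# weighted adjacency over the tokens. Toy sizes, no learned projections.
import torch

def self_attention(x: torch.Tensor) -> torch.Tensor:
    # x: (num_tokens, d) -- one row per word in the sentence.
    d = x.size(-1)
    q, k, v = x, x, x                        # toy version: no projection matrices
    weights = torch.softmax(q @ k.T / d ** 0.5, dim=-1)  # dense "adjacency"
    return weights @ v                       # each token aggregates all others

sentence = torch.randn(6, 32)                # 6 words, 32-dim embeddings
print(self_attention(sentence).shape)        # torch.Size([6, 32])
```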

AI ethics and predictions

There's a ton of topics we could pick to dissect from Benaich and Hogarth's work, such as PyTorch overtaking TensorFlow in research, the boom in federated learning, the analysis of talent and retention per geography, progress (or lack thereof) in autonomous vehicles, AI chips, and AutoML. We encourage readers to dive into the report to learn more. But we wrap up with something different.

Hogarth mentioned that the speculation phase in AI for biology and healthcare is starting, with lots of capital flowing. There are going to be some really amazing companies that come out of it, and we will start to see a real deployment phase kick in. But it's equally certain, he went on to add, there are going to be instances that will be revealed to be total frauds.

So, what about AI ethics? Benaich and Hogarth cite work by pioneers in the field, touching upon issues such as commercial gender classification, unregulated police facial recognition, the ethics of algorithms, and regulating robots. For the most part, the report focuses on facial recognition. Facial recognition is widespread the world over and has led to controversy, as well as wrongful arrests. More thoughtful approaches seem to be gathering steam, Benaich and Hogarth note.

The duo's report cites examples such as Microsoft deleting its database of 10 million faces (the largest available), which was collected without consent; Amazon announcing a one-year pause on letting the police use its facial recognition tool Rekognition to give "Congress enough time to put in place appropriate rules"; and IBM announcing it would sunset its general-purpose facial recognition products.

Hogarth referred to an incident in which a UK citizen claimed his human rights were breached when he was photographed while Christmas shopping. Although judges ruled against the claimant, they also established an important new duty for the police to make sure that discrimination is proactively "eliminated." This means that action on bias cannot be legally deferred until the tech has matured:

"This creates a much higher bar to deploying this software. And it creates almost a legal opportunity for anyone who experiences bias at the hands of an algorithm to have a foundation for suing the government or a private act of defiance technology," Hogarth said.


AI ethics often focuses on facial recognition, but it is becoming relevant in more and more domains.

MegaPixels: Origins, Ethics, and Privacy Implications of Publicly Available Face Recognition Image Datasets. © Adam Harvey / MegaPixels.cc

Hogarth also emphasized another approach, which he termed "API-driven auditability." He referred to a new law passed in Washington State with active support from Microsoft. This law restricts law enforcement's use of facial recognition technology by requiring that the software used be accessible to an independent third party via an API, to assess for "accuracy and unfair performance differences" across characteristics like race or gender.

Of course, even narrowing our focus on AI ethics, the list is endless: From bias to the use of technology in authoritarian regimes and/or for military purposes, AI nationalism, or the US tax code incentivizing replacing humans with robots, there's no shortage of causes for concern. Benaich and Hogarth, on their part, close their report by offering a number of predictions for the coming year:

The race to build larger language models continues, and we see the first 10-trillion-parameter model. Attention-based neural networks move from NLP to computer vision and achieve state-of-the-art results. A major corporate AI lab shuts down as its parent company changes strategy. In response to US DoD activity and investment in US-based military AI startups, a wave of Chinese and European defense-focused AI startups collectively raise over $100 million in the next 12 months.

One of the leading AI-first drug discovery startups (e.g. Recursion, Exscientia) either IPOs or is acquired for over $1 billion. DeepMind makes a major breakthrough in structural biology and drug discovery beyond AlphaFold. Facebook makes a major breakthrough in augmented and virtual reality with 3D computer vision. And NVIDIA does not end up completing its acquisition of Arm.

The record for predictions offered in last year's State of AI Report was pretty good: they got 5 out of 6 right. Let's see how this year's set of predictions fares.
