
AI in sixty seconds

There are no glowing brains, nor is there consciousness.
Written by Tiernan Ray, Senior Contributing Writer

To deliver you from any misconceptions, the following contains a brief explanation of artificial intelligence as it is practiced today.(1)

Today's AI consists of software programs of a kind referred to as deep learning.(2)

Deep learning programs transform input into output. All software programs do that, but the magic of deep learning is that the mathematical function that does the transformation is not written in advance by the computer programmer. Instead, it takes shape spontaneously, as the program is exposed to data.(3)

The input could be digital images of cats and dogs, and the output could be a numerical score, a one or a zero, classifying each picture as either a cat or a dog. The task could be more sophisticated, such as a breast ultrasound image that is transformed into a new image that highlights areas of tissue suspected to have cancer.(4)

No matter the input, a mathematical function will be automatically discovered that will transform it into the desired output.(5)

In this way, deep learning is a transformation machine, automating transformations far beyond what a human programmer could code by hand.(6)
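
To make that concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from a real AI system: the four numbers standing in for pictures, the cat-and-dog labels, and the step size are invented for illustration. The programmer writes only the recipe for adjusting weights, never the classifying function itself.

```python
# Hypothetical toy: the transformation is not written by a programmer.
# Its weights start at zero and take shape as the program is exposed
# to the data.
import numpy as np

# Toy data: one number standing in for each picture, with a target
# label of 0 ("cat") or 1 ("dog").
x = np.array([0.1, 0.4, 0.5, 0.9])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b = 0.0, 0.0  # learnable parameters

for step in range(1000):
    pred = 1 / (1 + np.exp(-(w * x + b)))  # the current transformation
    grad_w = np.mean((pred - y) * x)       # how the error changes with w
    grad_b = np.mean(pred - y)             # how the error changes with b
    w -= 0.5 * grad_w                      # nudge the weights toward
    b -= 0.5 * grad_b                      # the target labels

print(np.round(1 / (1 + np.exp(-(w * x + b)))))  # [0. 0. 1. 1.]
```

The function that emerges, sigmoid(w*x + b), scores the "cat" inputs below 0.5 and the "dog" inputs above it, although no one wrote that rule down.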

That is all that AI is at the moment. There is no consciousness, there are no glowing brains.(7) Whether such a machine is "intelligent" is open to debate.(8)

Also: Ethics of AI: Benefits and risks of artificial intelligence

FOOTNOTES: 

  1. The phrase artificial intelligence doesn't have any fixed meaning, for it was coined by Dartmouth professor John McCarthy in 1955 as a placeholder in a grant application. The best summary so far is by MIT's Marvin Minsky, a colleague of McCarthy's, who said the term stands for whatever is at the cutting edge of computer science. Commercial entities, such as software makers, will often use the term to mean whatever they want, simply to sound impressive by gaining the imprimatur of having "AI."
  2. Deep learning is a subset of a wider field of AI software called machine learning. Some believe AI has to have an element of learning because all intelligent entities exhibit an ability to learn. This notion of learning goes back to the earliest days of AI, in the 1940s, when neuropsychologist Warren McCulloch and logician Walter Pitts proposed that thought was the development of synaptic connections between neurons in the brain. That, they hypothesized, could be represented by artificial neurons whose synaptic connections change over time, with the change in connections representing learning. Psychologist Donald Hebb formalized this as a "learning rule" in 1949: "neurons that fire together, wire together," a form of reinforcement underlying thought. The term deep learning has surfaced several times over the decades and has been used in different ways. The usage has never been rigorous, and doubtless it will morph again; at some point it may lose its utility.
  3. The function that takes shape transforms the input by multiplying each piece of input data by some factor. Multiplying an input in this way is called weighting the input, giving each piece of input a greater or lesser influence on the final output. It is similar to the way different stocks have different weights in the S&P 500: Apple counts for more than smaller companies. The weights are what are known as "learnable parameters," which means that the weights themselves change repeatedly as the program is exposed to data, a kind of recursive, or self-reflexive, quality of deep learning. For the weights to reach the right values, the output of a deep learning program is repeatedly compared to a desired output, a target, and different weight values are tried until the output comes close enough to what is desired. The adjustment happens automatically: code called backpropagation calculates derivatives that indicate which way each weight should shift, and the weights are then nudged up or down in small, partly random steps, a procedure known as stochastic gradient descent. Stochasticity can be thought of as a synonym for randomness. The basic idea is to keep trying weight values until the formula of the deep learning program achieves the correct transformation of input into the target output. (A minimal code sketch of this loop appears after these notes.)
  4. This is an actual application introduced by NYU researchers in April. A convolutional neural network was fed tens of thousands of breast ultrasound images, each simply labeled as showing cancer or not, based on existing pathology reports. The convolutional neural network automatically used the labels as a target output to find the right weights to transform each ultrasound into a new image, called a saliency map, which highlighted areas of tissue suspected of having cancer. This was an automatic transformation, an augmentation of the image's pixels to make some regions more visually prominent. The same program was able to automatically produce a probability score of the likelihood of cancer on new images.
  5. In so-called generative deep learning, the output is a more complex object, such as a sentence produced from the example input sentences, or a synthetic picture modeled on the various example input pictures. For example, a program could take a text string typed by a human as input, such as "tell me a story," and produce a block of text as output, such as "Once upon a time..." That is one of the achievements of OpenAI's GPT-3 neural network program for natural language processing. The ability of the program to generate appropriate output gives such programs the air of human intelligence. What is actually happening is that numeric scores of word relatedness, derived from the input, are being transformed into scores of word relatedness that produce the output. (A second code sketch after these notes illustrates the idea in miniature.)
  6. Early digital computers were referred to as a "mill," an analogy to the grinding mills of old European towns. People would take their wheat to the mill and have it ground into flour: wheat into flour was a transformation. A computer program is like a mill, a machine that converts not physical matter but streams of ones and zeros into new streams of ones and zeros, the output, a different kind of transformation. AI, in the form of deep learning, is like a mill that can change its gears to produce new kinds of output as each new input is introduced. Many more details of the workings of AI as a machine, including the importance of data and bias, can be found in the companion article, Ethics of AI: Benefits and risks of artificial intelligence.
  7. Notions in the popular imagination, propelled by popular media, such as humanoid robots, reflect the human predilection for anthropomorphizing. They are fantasies.
  8. A machine that passes tests of intelligence such as the Turing Test might still not convince many people it is intelligent, as the computer scientist Scott Aaronson has discussed. Perhaps more important, as noted in Note (2), the focus from AI's early days was on intelligence as embodied in human thought. However, numerous scholars have explored the possibility of intelligence taking many other forms. For example, Michael Levin of Tufts University has examined the ways in which parts of living organisms other than brains do things such as compute. As Levin has written, "many biological phenomena, ranging from maze solving by cells and slime molds to complex regulative morphogenesis and regeneration, can be viewed as processes involving information-processing and decision-making, in the absence of a brain."
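
CODE SKETCHES:

As promised in note 3, here is the training loop in miniature: a hypothetical Python toy, not the code of any real framework. The data, the single weight, the squared-error measure, and the 0.1 step size are all invented, and the derivative is computed by hand; real systems obtain it with backpropagation across millions of weights.

```python
# Sketch of the loop in note 3: compare the output to a target,
# compute a derivative, and nudge the weight, one randomly chosen
# example at a time (the "stochastic" in stochastic gradient descent).
import random

data = [(0.1, 0.0), (0.4, 0.0), (0.5, 1.0), (0.9, 1.0)]  # (input, target)
w = 0.0  # a learnable parameter

for step in range(2000):
    x, target = random.choice(data)  # stochastic: pick one example
    output = w * x                   # the current transformation
    error = output - target          # how far from the target?
    grad = 2 * error * x             # derivative of the squared error
    w -= 0.1 * grad                  # shift the weight down the slope

print(w)  # settles where outputs come close enough to the targets
```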
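
And, as promised in note 5, a toy stand-in for the generative case. The two example sentences and the raw bigram counts are placeholders for what GPT-3 does with billions of learned weights, but the shape of the computation is the same: scores of word relatedness, derived from input text, are transformed into the words of the output.

```python
# Toy generation: tally how strongly each word predicts the next in
# the examples, then produce output by following those scores.
import random
from collections import Counter, defaultdict

examples = [
    "once upon a time there was a dragon",
    "once upon a time there was a princess",
]

scores = defaultdict(Counter)  # scores[word][next_word] = relatedness
for sentence in examples:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        scores[a][b] += 1

word, story = "once", ["once"]
while word in scores and len(story) < 8:
    nxt = scores[word]
    word = random.choices(list(nxt), weights=nxt.values())[0]
    story.append(word)

print(" ".join(story))  # e.g. "once upon a time there was a dragon"
```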