Google AI researchers find strange new reason to play Jeopardy!
Scientists at Google's AI unit tested a deep neural network on clues from the popular game show Jeopardy!. But unlike IBM's Watson triumph, this was less about the answers and more about the strange way the computer reformulates the question.
Google scientists have found another use for Jeopardy! questions, one that has less to do with understanding human speech and more to do with how computers communicate with one another.
And this week, they've made that work an open-source software tool available on GitHub to anyone using Google's TensorFlow framework for machine learning.
"Active Question Answering," or Active QA, as the TensorFlow package is called, will reformulate a given English-language question into multiple different re-wordings, and find the variant that does best at retrieving an answer from a database.
The system was developed by feeding Jeopardy! clues into a "reinforcement learning" neural network. The network got better and better at re-wording questions as it was rewarded for successfully retrieving the right answer.
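To make that training loop concrete, here is a minimal sketch in Python of the kind of reward signal such a system could use. It assumes the reward is token-level F1 overlap between the retrieved answer and the known correct answer (a common scoring choice in question answering; the exact reward in Active QA may differ), and `qa_system` is a hypothetical stand-in for whatever answer-retrieval backend is being queried.

```python
from collections import Counter

def f1_reward(predicted: str, gold: str) -> float:
    """Token-level F1 between a retrieved answer and the gold answer."""
    pred_tokens = predicted.lower().split()
    gold_tokens = gold.lower().split()
    if not pred_tokens or not gold_tokens:
        return 0.0
    # Count tokens shared between prediction and gold answer.
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def best_reformulation(reformulations, qa_system, gold_answer):
    """Score each candidate re-wording by the reward its answer earns,
    and return the (reward, question) pair with the highest reward."""
    scored = [(f1_reward(qa_system(q), gold_answer), q) for q in reformulations]
    return max(scored)
```

In a real reinforcement-learning setup, that reward would not just pick a winner; it would flow back into the network as the training signal that gradually shapes which re-wordings the model generates at all.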
Google AI authors, in the blog post on the project, note that their famous corporate mission is to "organize the world's information." In keeping with that, they "envision that this research will help us design systems that provide better and more interpretable answers, and hope it will help others develop systems that can interact with the world using natural language."
In the original paper, "Ask the Right Questions: Active Question Reformulation with Reinforcement Learning," presented this past spring at the International Conference on Learning Representations, Google AI researchers Christian Buck, Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang built upon principles of machine translation. They interpreted the task of training a computer to reformulate clues from Jeopardy! as being akin to foreign language translation. The goal was to paraphrase the Jeopardy! clues in a syntax that improves querying of a database.
For example, given a clue like "Gandhi was deeply influenced by this count who wrote 'War and Peace'," the neural network had to learn to put that clue into the form of a question that would produce the correct answer, which is Leo Tolstoy. (The Jeopardy! questions came from a 2017 project called SearchQA, built by researchers at New York University and Carnegie Mellon. Their dataset was, in turn, built by crawling the website "J! Archive," a fan site for the show.)
The Active QA package includes a customized version of Google's TensorFlow code for machine translation. It's based on Google research from 2014 on what's called "sequence to sequence" neural networks for translating between, say, English and French.
The code package also includes a so-called question answering system, the component that actually retrieves the answers to the queries Active QA puts to it. This is based on a deep learning system for answering questions called "BiDAF," developed in 2017 by researchers at the Allen Institute for Artificial Intelligence and the University of Washington.
What's most significant, perhaps, in the paper and in this new toolkit, is that the deep neural network is not learning how to come up with well-phrased natural-language speech, nor is it learning much about asking questions in the typical sense that humans mean it. It's not like The Washington Post's robot journalist, impersonating human writing.
Rather, Active QA is learning tricks that improve how it searches a database, and the results often sound like gibberish to a human ear. For example, the authors note that the above clue about Gandhi ("Gandhi was deeply influenced by this count who wrote 'War and Peace'") was reformulated by Active QA as "What is name gandhi gandhi influence wrote peace peace?"
In another instance, the original Jeopardy! clue, "During the Tertiary Period, India plowed into Eurasia & this highest mountain range was formed," was refashioned as "What is name were tertiary period in india plowed eurasia?" That reformulation succeeded in returning the correct answer: the Himalayas. Numerous examples, many with the same weird patterns of awkward grammar and repeated words, are offered in the paper's appendix.
While it's doggerel as natural language goes, the authors see the computer-constructed phrases as a real advance in query skills. The Active QA neural net wasn't just slightly modifying the original clues; it was independently rediscovering techniques that have long existed in the science of information retrieval, such as "stemming," in which a word is reduced from its inflected form, say a conjugated verb, to its root form.
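As a toy illustration of stemming, the information-retrieval trick the network rediscovered, here is a deliberately naive suffix-stripper in Python. It is for illustration only; real retrieval systems use established algorithms such as Porter's stemmer, and `naive_stem` is a hypothetical helper, not anything from the Active QA code.

```python
def naive_stem(word: str) -> str:
    """Strip a common English suffix so inflected forms of a word
    collapse to a shared root, improving term matching in retrieval."""
    # Check longer suffixes first so "ed" doesn't preempt "edly", etc.
    for suffix in ("ingly", "edly", "ing", "ed", "es", "s"):
        # Only strip when a reasonably long stem would remain.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word
```

Run on the Gandhi clue, for instance, this maps "influenced" and "influence" toward the same root, which is exactly the kind of collapse that helps a query hit more of a database's indexed terms.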
"Sometimes," they write, "AQA learns to generate semantically nonsensical, novel, surface term variants; e.g., it might transform the adjective dense to densey." The "only justification for this," they conclude, is that it does a good job "exploiting" the way the BiDAF system has encoded the answers.
As the authors put it, "It seems quite remarkable then that AQA is able to learn non-trivial reformulation policies ... One can think of the policy as a language for formulating questions that the agent has developed while engaging in a machine-machine communication with the environment."
The day may not be far off when bots will do more of the Googling than people.