To give Dreamforce attendees a more forward-looking glimpse into its product capabilities, the Salesforce Research team demonstrated some of its breakthroughs in areas like conversational AI and natural language generation. Their research is focused on building an AI-driven world so far only found in sci-fi, said Salesforce Chief Scientist Dr. Richard Socher.
"We're focused on trying to predict this future, and the best way to predict is to create," Socher said.
Salesforce's Research division comprises four subgroups: fundamental research, applied research, new product incubation and AI platform research. This is a change from just a couple of years ago, Socher said, when they were focused only on fundamental AI research. The fundamental research and applied research efforts create a "virtuous cycle," he said, of demonstrating what's possible and what's needed.
For instance, Salesforce Senior Research Scientist Victoria Lin demonstrated querying a relational database using a voice assistant.
"Lots of enterprise data exists in relational databases," she noted. Getting insights from that data, she said, "typically requires me to know a query language such as SQL... or having a team of analysts run a report for me."
Lin's research focuses on training a neural network that interprets natural language queries and translates them to SQL queries, end to end.
"This is a huge step towards democratization of data," Lin said, and a move toward "the end of data entry and the beginning of data conversation."
Next, Salesforce Senior Research Director Caiming Xiong gave what he said was the first-ever live demo of a fully autonomous conversational AI agent in front of a large audience. "The future of AI is conversational," he said.
In the demo -- in which an AI agent helps Xiong modify his pizza order -- the agent can authenticate the caller (via email address verification), handle interruptions and process multiple tasks. With this kind of fully conversational agent, Xiong said, customers would never have to wait for a human call center agent to assist them.
"The entire system works today," he said. "In the future it will be even more seamless, faster, with more capabilities."
The AI agent relied on speech recognition, as well as a natural language understanding engine to analyze sentiment, intent and key information. Then, a dialogue management engine determined the next best action and generated a response.
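The pipeline Xiong described -- language understanding feeding a dialogue manager that picks the next action and produces a reply -- can be sketched with toy stand-ins. Every function, intent name and rule below is hypothetical; the real engines are learned models:

```python
def understand(utterance):
    """Toy NLU: extract an intent and key information (slot values).
    The real engine also analyzes sentiment; that is stubbed out here."""
    text = utterance.lower()
    intent = "modify_order" if "change" in text else "unknown"
    size = next((w for w in ("small", "medium", "large") if w in text), None)
    return {"intent": intent, "slots": {"size": size}}

def next_action(state):
    """Toy dialogue manager: choose the next best action from the state."""
    if state["intent"] == "modify_order" and state["slots"]["size"]:
        return ("confirm", state["slots"]["size"])
    return ("clarify", None)

def respond(action):
    """Toy response generation: turn the chosen action into words."""
    kind, size = action
    if kind == "confirm":
        return f"Sure -- I've changed your pizza to a {size}."
    return "Sorry, what would you like to change?"

state = understand("Can you change my pizza to a large?")
reply = respond(next_action(state))
```

The separation of stages is the point: understanding, decision-making and response generation are distinct components, which is what lets such an agent track multiple tasks and recover from interruptions.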
Next, Salesforce Senior Research Scientist Nitish Keskar demonstrated the latest in natural language generation -- a technology already used for tools like email autocomplete, grammar correction and customer service bots. However, Keskar noted, these use cases "typically happen at sentence or snippet level."
Salesforce Research, he said, is working on "the ambitious goal of long-form text generation."
"We're not generating the next sentence," he said, "we're generating the whole article."
To that end, Keskar unveiled CTRL, the world's largest open-source language model. It has an unprecedented 1.63 billion parameters and has been trained on 143 GB of text, including millions of documents, thousands of books and all of Wikipedia. It's trained to predict the next word given a history of a few words, and can generate up to 500 words beyond the seed text.
"It's easy and explicit to communicate to the model what your intention is," Keskar said.
He gave a live demo of the model, using it to write a full press release for a product based on just a few words of seed text. A user can start by choosing a "genre" of document to write -- such as a press release, humor, legal, or the genre of "Wikipedia" to generate explanatory text. After giving the tool some seed text, the user just hits tab to trigger a completion. "It knows what the most plausible next sentence is," Keskar said.
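The workflow Keskar demonstrated -- pick a genre, supply seed text, let the model continue it -- can be mimicked with a toy stand-in. CTRL conditions generation on a control code prepended to the prompt and samples word by word from learned parameters; the lookup table, control codes and seed text below are hypothetical placeholders for that model:

```python
# Hypothetical continuations standing in for a learned language model.
CONTINUATIONS = {
    ("press_release", "announcing"): "the launch of its new platform today",
    ("wikipedia", "announcing"): "is the act of making information public",
}

def generate(control_code, seed):
    """Condition on a control code (the 'genre'), then extend the seed.
    CTRL itself samples one word at a time, up to ~500 words ahead."""
    last_word = seed.split()[-1].lower()
    continuation = CONTINUATIONS.get((control_code, last_word), "")
    return f"{seed} {continuation}".strip()

text = generate("press_release", "Acme is announcing")
```

The same seed continued under the "wikipedia" control code would come out as explanatory text instead of marketing copy -- which is what Keskar meant by communicating intent to the model explicitly.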
Meanwhile, he said, Salesforce is "actively working with partners to ensure we promote understanding of how this model can and should be used and mitigate any negative consequences."
Lastly, Salesforce research scientist Nazneen Rajani demonstrated how Salesforce Research is working on making AI systems more interpretable and, hence, more trustworthy.
"Explainability is crucial for us humans to have meaningful interactions," she said, "to understand AI safety and take appropriate actions."
Rajani demonstrated how, when giving a prediction or recommendation, an AI model can be built to explain its reasoning.
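One simple way a model can surface its reasoning alongside a prediction is to report how much each input feature contributed to the score. The linear scorer, feature names and weights below are hypothetical illustrations, not Rajani's actual method:

```python
def predict_with_explanation(features, weights):
    """Toy linear scorer that returns a prediction plus the features
    ranked by how strongly each one drove it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions,
                     key=lambda name: abs(contributions[name]),
                     reverse=True)
    return score, reasons

# Example: score a sales lead and explain which signals mattered most.
score, reasons = predict_with_explanation(
    {"days_since_contact": 30, "support_tickets": 2},
    {"days_since_contact": -0.1, "support_tickets": -0.5},
)
```

Instead of a bare number, the user sees that the long gap since last contact weighed more heavily than the open support tickets -- the kind of human-readable justification that makes a recommendation easier to trust or override.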
Socher added that "trust is our number one value."
Speaking with ZDNet, Salesforce Einstein VP of Products Marco Casalaina echoed that notion, stressing that Salesforce doesn't use customer data to build or train global AI models. Salesforce built TransmogrifAI, an end-to-end automated machine learning library for structured data, to help customers build customized machine learning models. The Einstein platform now powers around 10 billion predictions a day for Salesforce customers, Casalaina said.
"Underlying that is the fact that for each customer, they have their own individual predictions," he said. "When they build these models, it's built solely on their data and for them."