The model, developed by the Google Brain Team and based on Google's TensorFlow software library, produced several passable news headlines using extracts from articles.
The software turned "metro-goldwyn-mayer reported a third-quarter net loss of dlrs 16 million due mainly to the effect of accounting rules adopted this year" into the headline "mgm reports 16 million net loss on higher revenue".
In another example, it summarised "australian wine exports hit a record 52.1 million liters worth 260 million dollars (143 million us) in september, the government statistics office reported on monday" as "australian wine exports hit record high in september".
"We've observed that due to the nature of news headlines, the model can generate good headlines from reading just a few sentences from the beginning of the article," said Peter Liu, a software engineer from the Google Brain Team.
Fortunately for human reporters and sub-editors, headline writing can't be completely automated, at least for now.
Liu notes that the team created a "nice proof of concept", but that its method struggles when confronted with a summarisation task that necessitates reading an entire document.
"In those tasks, training from scratch with this model architecture does not do as well as some other techniques we're researching, but it serves as a baseline," he wrote.
To spur progress in this field, Google has open-sourced the models and published them on GitHub for others to use.
While its results are based on a model trained on multi-GPU and multi-machine systems, the code it released was simplified to run on one machine.
Google trained its models using data from the Annotated English Gigaword, a dataset developed at Johns Hopkins University which consists of about four billion words from 10 million news articles written by various English-language newswire services. It's the same dataset that IBM's Watson researchers have used for similar research.
TensorFlow is the technology Google open-sourced last year, which powers a range of the company's services including Smart Reply in its email app, Inbox, and search in Google Photos.
To emulate how humans summarise text, Google used a deep-learning model similar to the one that powers Smart Reply, called sequence-to-sequence learning, which has also been put to use in video captioning, speech recognition, and machine translation.
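The shape of a sequence-to-sequence model can be sketched in a few lines: an encoder folds the input tokens into a fixed-size state vector, and a decoder emits output tokens one at a time from that state. The toy below is illustrative only, with tiny random, untrained weights; it is not Google's TextSum code, and the vocabulary and dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<pad>", "<eos>", "wine", "exports", "hit", "record", "high"]
H = 8  # hidden-state size (arbitrary for this sketch)

# Random, untrained parameters: embeddings, encoder/decoder RNN weights,
# and an output projection back to the vocabulary.
emb = rng.standard_normal((len(VOCAB), H)) * 0.1
W_enc, U_enc = (rng.standard_normal((H, H)) * 0.1 for _ in range(2))
W_dec, U_dec = (rng.standard_normal((H, H)) * 0.1 for _ in range(2))
W_out = rng.standard_normal((H, len(VOCAB))) * 0.1

def encode(tokens):
    """Fold the whole input sequence into one state vector."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(emb[VOCAB.index(t)] @ W_enc + h @ U_enc)
    return h

def decode(h, max_len=5):
    """Greedily emit output tokens until <eos> or the length limit."""
    out, x = [], emb[VOCAB.index("<pad>")]
    for _ in range(max_len):
        h = np.tanh(x @ W_dec + h @ U_dec)
        tok = VOCAB[int(np.argmax(h @ W_out))]
        if tok == "<eos>":
            break
        out.append(tok)
        x = emb[VOCAB.index(tok)]
    return out

summary = decode(encode(["wine", "exports", "hit", "record", "high"]))
print(summary)  # tokens are arbitrary here, since the weights are untrained
```

In a trained model these weights would be learned from article/headline pairs, so the decoder's greedy choices would form a plausible headline rather than arbitrary tokens.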
As Liu explains, the two approaches to summarisation are extractive and abstractive. The former takes words from a given piece of text and joins them together to create a summary, but this can produce clumsy results. The latter is what Google aimed to achieve and is how humans summarise text, allowing for rephrasing and the use of words that do not appear in the original text.