Virtual assistants are becoming ubiquitous in our work and home lives. Many of us own at least one personal assistant -- whether Siri on an iPhone or Cortana on Windows 10 -- but which one is better and more accurate at responding to our requests?

To make the two studies comparable, the researchers asked the same questions as in the 2017 study.
They posed 4,952 questions to each of the five contestants and sorted the responses into several categories.
These categories included whether the assistant answered verbally and whether the answer came from a database (such as the Knowledge Graph). The researchers also tracked whether an answer was attributed to a third-party source ("According to Wikipedia ..."), how often the assistant did not understand the query, and how often the device attempted an answer but simply got it wrong.
The 2018 study showed that Google Assistant has maintained its No. 1 position on both platforms -- smartphone and Google Home -- by both attempting to answer and correctly answering the most queries.
Alexa showed the largest year-over-year improvement, attempting to answer 2.7 times as many queries as it did last year: the share of questions it attempted rose from 19.8 percent to 53.0 percent.
Microsoft's Cortana Invoke saw the second largest increase in attempted answers, going from 53.9 percent to 64.5 percent, followed by Siri, which rose from 31.4 percent to 40.7 percent.
Although most devices were consistent with the 2017 study in where they sourced their answers, Siri showed the largest shift, with 23 percent of its results sourced from third-party information -- also known as featured snippets.
Every competing personal assistant has made significant progress since 2017 in closing the gap with Google Assistant.
On accuracy, the only assistant to improve year over year was Cortana Invoke, which went from 86.0 percent to 92.1 percent of questions answered correctly.