Back in 2015, information channels were awash with reports of the human attention span having plummeted to 8.25 seconds for a basic task. Even more ignominious was one detail in the report: The human ability to focus registered lower than a goldfish's attention span, which came in at 9 seconds.
You could almost hear the collective sigh of relief when it turned out that none of this was true. (Goldfish apparently do have memories and can learn!)
The seed of this very believable myth was apparently sown in a Microsoft report. And yet, if you strip away the sensational detail that wormed its way into innumerable news reports, the underlying facts have turned out to be alarmingly on point.
Our attention spans are shrinking.
Primed to forget
Dr. Gloria Mark, a professor of informatics at the University of California, Irvine, who studies how digital media affects our lives, has written a book validating the supposition that our ability to focus is in peril.
"In 2004, we measured the average attention on a screen to be two and a half minutes," Mark writes. "Some years later, we found attention spans to be about 75 seconds. Now we find people can only pay attention to one screen for an average of 47 seconds," she says.
From 150 seconds to a 47-second average attention span in less than two decades is a stunning decline that could have profound effects on our species.
Many scientists point to a gradual cognitive decline taking place as we become more and more reliant on technology to do things we once used our brains to do for ourselves.
When you stop your brain from doing difficult things -- reading a map to reach your destination, say, instead of letting a GPS app like Google Maps guide you there -- you are putting the brakes on the very processes that helped our species evolve.
It becomes a vicious feedback loop. "Once you stop using your memory it will get worse, which makes you use your devices even more," says Professor Oliver Hardt, who studies the neurobiology of memory and forgetting at McGill University in Montreal.
"We can predict that prolonged use of GPS likely will reduce grey matter density in the hippocampus ... GPS-based navigational systems don't require you to form a complex geographic map. Instead, they just tell you orientations, like 'turn left at next light,'" says Hardt.
Listening to a map app does not engage the hippocampus very much, unlike being forced to engage with a physical map, which requires a much bigger and more complex cognitive process. And it is having disastrous consequences for us.
Hardt points to brain imaging of people who have used GPS for a very long time as evidence: they show distinct impairments in the kinds of spatial memory that depend on the hippocampus.
It's not altogether surprising, then, that the average American spends 2.5 days a year looking for misplaced items, with Americans collectively spending $2.7 billion a year replacing them. Millennials, for their part, are twice as likely as boomers to misplace their things.
Other measurements signal something similar at work: the average person picks up their phone more than 1,500 times per week, and the mere presence of a smartphone has been shown to decrease cognitive ability.
Hardt doesn't have the data yet, but he believes that "the cost of this might be an enormous increase in dementia."
What does this all mean for our society? We must prepare for a future in which we're unable to do the most basic things -- whether we have dementia or not. In other words, we will have to find a way to survive and exist in an era of potentially chronic memory impairment.
Within that context, two remarkably ingenious and relatively cheap solutions have been engineered to help the memory-impaired navigate the world.
A research team from the University of Waterloo in Ontario, Canada has designed a companion robot with an episodic memory of its own that, while initially made with dementia patients in mind, is being lauded for its ability to help anyone who is habitually forgetful.
If our current problems with memory become any worse, such companion robots could become as prominent a fixture in homes as coffee machines or Roomba vacuums.
The Waterloo research team started off with a Fetch mobile manipulator robot, which sports a built-in camera and a convolutional neural network. Using deep learning and an object-detection algorithm, researchers taught the robot to identify, track, and keep a memory log of objects in the room. The log was stored as video, along with the time and date each object entered or left the robot's field of vision.
Researchers also developed a graphical interface to enable users to choose objects they want to be tracked.
In other words, if you forgot where you last left your phone in your living room, you would simply type the missing object's name in that interface, and the object's location would be revealed to you.
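The Waterloo team's code isn't public in this report, but the core idea -- an episodic log of detection events that can answer "where did I last see X?" -- can be sketched in a few lines of Python. All names here are illustrative, not the team's actual implementation:

```python
from datetime import datetime

class ObjectMemoryLog:
    """Toy sketch of an episodic object log: record when a detector
    sees an object appear or disappear, then answer 'where is X?'."""

    def __init__(self):
        self.events = []  # (timestamp, object_name, event, location)

    def record(self, object_name, event, location):
        # 'event' is "entered" or "left" the camera's field of view
        self.events.append((datetime.now(), object_name, event, location))

    def last_seen(self, object_name):
        # Walk the log backwards to find the most recent sighting
        for ts, name, event, location in reversed(self.events):
            if name == object_name:
                return ts, event, location
        return None  # never observed

log = ObjectMemoryLog()
log.record("phone", "entered", "coffee table")
log.record("phone", "left", "coffee table")
log.record("phone", "entered", "sofa")

ts, event, location = log.last_seen("phone")
print(f"phone {event} view at: {location}")  # phone entered view at: sofa
```

In the real system, the `record` calls would be driven by the object-detection network rather than typed by hand, and the query would come through the graphical interface.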
Lead researcher Ali Ayub, a post-doctoral fellow in electrical and computer engineering at the university, says that the tracking accuracy is currently around 90 percent but improving.
"The long-term impact of this is really exciting," said Dr. Ayub. "A user can be involved not just with a companion robot but a personalized companion robot that can give them more independence."
Tags with memories
Researchers at MIT grappled with the same problem of impaired memory but employed a different approach. They used an RFusion robotic arm with a built-in camera and antenna that work together to spot and retrieve items, even ones completely hidden from view under a pile of stuff.
What makes this solution equally attractive is its simplicity: items have RFID tags, which are batteryless and dirt cheap, pasted onto them. (Radio-frequency identification -- aka RFID -- is essentially a wireless system made up of tags that emit signals and readers whose antennas receive and interpret them.)
No matter where the objects are and how many things are piled over them, MIT's RFusion system can haul them out (as long as they are within the zone that the robot arm operates in).
Using machine learning, the robotic arm -- which is integrated with the camera and antenna -- homes in on the item's location.
Once it has targeted the right space, the neural network kicks in, fusing signals from the RFID tag with AI-driven visual cues from the camera to clear the area of other objects that may lie on top of the one it is searching for.
The key process here is reinforcement learning, in which the algorithm learns -- through a computational reward system -- to move the robotic arm to its destination in as few moves as possible.
After adjusting the gripper's angle and width, and verifying the accuracy of the RF tag one last time, the robotic arm picks up the object and places it to one side.
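The reward-driven search described above can be illustrated with a minimal tabular Q-learning sketch: an agent on a one-dimensional track pays a small cost per move and earns a reward on reaching the target, so it learns the shortest route. This is a toy stand-in, not RFusion's actual controller, which works in continuous space with RF and visual input:

```python
import random

TRACK, TARGET = 10, 7
ACTIONS = (-1, +1)  # move left or right along the track
q = {(s, a): 0.0 for s in range(TRACK) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                       # training episodes
    s = random.randrange(TRACK)
    for _ in range(50):
        if s == TARGET:
            break
        # epsilon-greedy action choice
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), TRACK - 1)
        # each step costs -1, so fewer moves means more total reward
        reward = 10.0 if s2 == TARGET else -1.0
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# Follow the learned greedy policy from one end of the track
s, path = 0, [0]
for _ in range(20):          # cap steps in case the policy is imperfect
    if s == TARGET:
        break
    a = max(ACTIONS, key=lambda x: q[(s, x)])
    s = min(max(s + a, 0), TRACK - 1)
    path.append(s)
print(path)
```

The step penalty is what encodes "as few moves as possible": any detour lowers the total reward, so the learned policy heads straight for the target.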
It's not only the memory-impaired universe that could benefit from this elegant solution.
Thanks to AI algorithms and RFID tags, a cost-effective solution emerges that could navigate piles of orders in warehouses and fulfill them, or work in any industrial setting that requires managing supply-chain complexity.
"This idea of being able to find items in a chaotic world is an open problem that we've been working on for a few years," said Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science at MIT.
"Having robots that are able to search for things under a pile is a growing need in industry today," he added.