There's a paper in the journal PLoS Computational Biology that is incredibly significant to folks thinking through the intersection of human-computer interaction and learning or entertainment. Published by Max Lungarella and Olaf Sporns, Mapping Information Flow in Sensorimotor Networks explains the relationship between the body (the system of sensory perception that interacts with the world) and the integration of that information into what we call "knowledge" and "experience." Why is that important? Well, in a nutshell, it reshapes the idea that the main challenge is getting input into a CPU where it can be analyzed; rather, the sensory system itself is part of the experience of learning and can change it.
We find that information structure and information flow in sensorimotor networks (a) is spatially and temporally specific; (b) can be affected by learning, and (c) can be affected by changes in body morphology. Our results suggest a fundamental link between physical embeddedness and information, highlighting the effects of embodied interactions on internal (neural) information processing, and illuminating the role of various system components on the generation of behavior.
The important feature of this approach is that it is "model-free," assembling meaning from a combination of current experience, the state of the "body" of sensors, and past experience. This is George Lakoff's notion of the embodied mind, in which our brain relies on metaphorical connections between experiences ("up is better" or "down is worse") to map reality. We, and apparently machines now, interact with the world as a system, a "whole" made up of parts; as those parts change, our experience changes.
Think, for instance, of the change experienced by a person suddenly struck blind; their body readjusts to the world. A machine's relationship to the world may be changed by a new three-button trackball, but if the computer's operating system were designed to recognize that new dimensions of input were available, it might respond to the user in a different way instead of just providing another way to mouse up and down.
We generally think of plugging new input/output devices into a computer and turning it on to get something new. But with this model, the previously stored data is changed by the new parts, too. Instead of replacing one computer or I/O or algorithm with another, we ought to be thinking about how to preserve the record of the previous machine experience for integration with new data collected by the changed machine.
This has important implications for human learning and the design of machine augmentation for human intelligence. It suggests that we need to raise ourselves and our computational environment together, rather than just think in terms of uptime and downtime, on/off and input/output, when we design.
A Segway with a visual system won't become a blind user's legs until it is linked to the perception and intuition of the rider, because the whole system, rider and scooter, has to act together. Just plugging in new inputs does little to improve the user experience if they are poorly integrated.
We know one Web service needn't be the sole source of input for an application. We need to think more about how services change one another when they meet and combine. When we design to keep people in our site, we stunt something in the social network we intend to build. Add video to a site that was primarily text-based or audio-based before and you get something new, not just an upgrade, because a different kind of shared experience is flowing through the server.
There's a lot to think about in this paper.