Comprehension and Retention

TOPICS: Software

I need your help in doing a simple experiment - and yes, I'll report the results here.

The purpose of the exercise is to see whether there are grounds for believing that a more controlled experiment would reveal systematic differences in information comprehension and retention depending on whether that information is conveyed on paper or on screen.

This is important because a difference in either comprehension or retention, or both, would mean that a wide range of significant endeavours, from stock market and related financial decision making to public education, have been more affected by the transition from paper to screen than we previously understood.

What I'd like you to do is recruit a few other people, have them read a document, and then answer some on-line questions about it.

The structure is simple:


  1. The document has about 3,000 words - so please allow your subjects about 12 minutes to read it carefully.


  2. About equal numbers of people should be asked to answer the questions immediately after reading the document; the others about 24 hours later.


  3. About equal numbers of people should be asked to read the document on paper; the others using a PDF reader of their (or your) choice.

I have set up a PHPSurveyor page to ask the questions and collect responses, so all respondents should get the same questions in the same way.

Please be aware that we want to end up with roughly equal numbers of people in all four categories, but that most people will find it easier to provide on-screen/immediate responses than on-paper/delayed ones. In other words, please try to recruit some people for each category, with special emphasis on the latter (more difficult to fill) groupings as listed below:


  1. on screen, immediate answers
  2. on screen, day later answers
  3. on paper, immediate answers
  4. on paper, day later answers

Obviously the more participants you get involved, the better. In a controlled experiment random assignment of subjects to categories would be a given, but I don't think randomness has much meaning here - just make sure that if you have several volunteers you don't accidentally group them into categories by obvious control variables like age.
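If you have more than a handful of volunteers, the loose balancing described above can be mechanized. The sketch below is one illustrative way to do it, not part of the experiment itself: it shuffles volunteers within rough age bands and then deals them round-robin into the four cells, so cell sizes stay roughly equal and no cell is dominated by one age group. The function and data names are mine, chosen for the example.

```python
import random

# The four cells of the informal 2x2 design described above.
CELLS = [
    ("screen", "immediate"),
    ("screen", "day-later"),
    ("paper", "immediate"),
    ("paper", "day-later"),
]

def assign(volunteers):
    """Map each (name, age) volunteer to one of the four cells.

    Volunteers are grouped into decade-wide age bands, shuffled within
    each band, then dealt round-robin across the cells so that each band
    is spread over all four categories.
    """
    by_band = {}
    for name, age in volunteers:
        by_band.setdefault(age // 10, []).append(name)

    assignments = {}
    i = 0
    for band in sorted(by_band):
        group = by_band[band]
        random.shuffle(group)              # randomize order within an age band
        for name in group:
            assignments[name] = CELLS[i % len(CELLS)]
            i += 1                         # round-robin keeps cell sizes equal
    return assignments
```

With eight volunteers split across two age bands, each cell receives two people and each band is spread across all four cells, which is the kind of balance the paragraph above asks for.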

I'm not trying to settle anything here - the goal is to see whether more controlled research is likely to prove warranted. That is, your work will ultimately support a grant proposal for somebody - not necessarily me.

Please download the test document (happy.pdf) from here and point people at the questions here.

The story, incidentally, is something I wrote for LinuxWorld a few years ago; it reflects a rather bitter real-life experience.

Implicitly, the present experiment tests the hypothesis that it's possible to get interesting results this way. In a more formal, i.e. controlled, setting my hypothesis would be that people reading the case on paper will do consistently better than people reading it on screen, with the greatest agreement between the two groups on immediate questions about pervasive emotional content, and the least on day-later factual questions whose source lies near the middle of the case document.

Please help - the results won't have value unless lots of people - hundreds - contribute. Feel free to contact me directly if you have questions or concerns that you don't want to raise in the discussion/comments pages here. That's murph at winface.com, of course.


  • No real need Murph.

    Just talking about myself here. But if I go to the trouble of picking up a piece of paper to read it, then I actually read it. Assuming there is a reason to read it of course. When it comes to on screen, I "skim" read most things.

    No doubt in my mind which offers better retention...
  • Interesting observation

    Just a personal anecdote, but I find that, for me, a lot depends on the position of the monitor. When the monitor is mounted high on the desktop I tend to skim through documents, so I tend to mount my monitor low, near the keyboard.

    When I want to read thoroughly I get much better results when I'm using my laptop IN MY LAP or synching the document to my Palm device. AAMOF, I don't personally notice any difference between using my Palm device or paper.
  • Go back to school Murph

    And you may learn how to run research projects. I'm not even going to bother critiquing your methodology - you don't have one, nor do you have double blinds or any possibility of repeatability or validation - but hey, that doesn't matter, you'll publish results anyway.

    The way to hell is paved with good intentions.
    • umm, Tony?

      Did you read the bit about the purpose of this effort?
  • Your results

    NextStage has done several tests along these lines. I'd be interested in learning the outcome of your tests and discovering how our results compare. This would also provide some of the other commenters with information on how different research paradigms affect the outcome of the experiment. Thanks - Joseph
    • a comment about results

      Whether we see results here or not depends on participation - so far I have about 40 answers, and that's not enough.

      In the longer run, I hope to do a controlled experiment using several hundred future MBAs. When that happens, I'll mention the result in my normal blog.
  • Start with a survey

    Conducting an unscientific experiment is a little too much work.

    Besides, I think it mostly boils down to habit. Pretty much everyone grows up with paper, even those who regularly use computers.
    Erik Engbrecht
  • Will see whether I can contribute to the survey

    Not sure whether my colleagues are interested. The results will be skewed because of the high staff turnover rate in our company; most staff are under 30. I am interested in this survey because it's my own question too.

    However, the main drive for helping this survey is your happy.pdf. Previously, I thought that only our company could screw things up this way. Now I know that I am not alone...
    • Thanks! - more is better

      The skewing you mention isn't an issue - the object here isn't to get research quality results, just to see whether doing the research could be worthwhile.
  • Side note

    When we did on-line testing in Utah schools, we got lower scores across the board, in both Math and Reading. I don't think anyone has looked at why, but my guess was that kids are used to bubbling paper when it counts, and doing online stuff when it doesn't. It is just too easy to click an answer, any answer, and move on. So they do. Also, with no pencil needed to answer the problem, there is less writing things down and working things out in on-line testing, despite my best efforts to encourage kids to do so.
    Not really a comment on retention, just an idea that maybe, if you test people in both the paper and screen groups with the same format of test, you'll get better (more accurate) results. Avoid testing screen readers with screen tests and paper readers with paper tests.
    • Confused !

      I do not understand
  • Laudable experiment, but there's tons of literature on this

    And the answer is very clear: People not only have poorer comprehension and retention on screen, they also read *differently*.

    Scanning versus reading:

    With regard to screen size:

    In the latter, Jakob Nielsen states: "Low-resolution monitors (including all computer screens until now) have poor readability: people read about 25% slower from computer screens than from printed paper. Experimental 300 dpi displays (costing $30,000) have been measured to have the same reading speed as print, so we will get better screens in the future. People will simply not read long texts at a reduced reading speed, so unless they have much better screens, electronic books will have a problem."

    As I said, great that you're willing to do this. The methodology is appropriate to the cost -- it's a great illustration of the kinds of things people can do in usability if they want to. Nobody knows how to design experiments anymore. I'd like to see someone apply similar ingenuity to creating a real test of the assertion that Fitts's Law justifies separating application menus from the primary application window.
    • literature and applicability

      I don't know whether, or to what extent, usability data can be applied to more general, particularly longer-term, retention and comprehension issues.

      Equally importantly, the technology changes and so do the readers - so do ten-year-old results really apply today?

      In both cases I'd lean to yes, but I don't know that the evidence is in - and I'd like to find out.