Windows 7 memory usage: What's the best way to measure?

Summary: Windows memory management is rocket science. And don't believe anyone who tells you otherwise. Since Windows 7 was released last October I've read lots of articles about the best way to measure and manage the physical memory on your system. Much of it is well-meaning but just wrong. To help cut through the confusion, I've put together a tutorial and accompanying gallery that explains how to make the most of your memory.

Windows memory management is rocket science. And don't believe anyone who tells you otherwise.

Since Windows 7 was released last October I've read lots of articles about the right and wrong way to measure and manage the physical memory on your system. Much of it is well-meaning but just wrong.

It doesn't help that the topic is filled with jargon and technical terminology that you literally need a CS degree to understand. Even worse, web searches turn up mountains of misinformation, some of it on Microsoft's own web sites. And then there's the fact that Windows memory management has evolved, radically, over the past decade. Someone who became an expert on measuring memory usage using Windows 2000 might have been able to muddle through with Windows XP, but he would be completely flummoxed by the changes that began in Windows Vista (and its counterpart, Windows Server 2008) and have continued in Windows 7 (and its counterpart, Windows Server 2008 R2).

To help cut through the confusion, I've taken a careful look at memory usage on a handful of Windows 7 systems here, with installed RAM ranging from 1 GB to 10 GB. The behavior in all cases is strikingly similar and consistent, although you can get a misleading picture depending on which of three built-in performance monitoring tools you use. What helped me understand exactly what was going on with Windows 7 and RAM was to arrange all three of these tools side by side and then begin watching how each one responded as I increased and decreased the workload on the system.

To see all three memory-monitoring tools at work, be sure to step through the screen shot gallery I created here: How to measure Windows 7 memory usage.

 

Here are the three tools I used: 

Task Manager You can open Task Manager by pressing Ctrl+Shift+Esc (or press Ctrl+Alt+Delete, then click Start Task Manager). For someone who learned how to read memory usage in Windows XP, the Performance tab will be familiar, but the data is presented very differently. The most important values to look at are under the Physical Memory heading, where Total tells you how much physical memory is installed (minus any memory in use by the BIOS or devices) and Available tells you how much memory you can immediately use for a new process.
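
Those two values are also easy to pull programmatically. Here's a minimal C++ sketch using the Win32 GlobalMemoryStatusEx call; it's just an illustration of where figures like these come from, not one of the three tools in this tutorial:

#include <windows.h>
#include <cstdio>

// Minimal sketch: read roughly the same Total and Available physical-memory
// figures that Task Manager's Performance tab reports.
int main() {
    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);   // must be set before the call
    if (!GlobalMemoryStatusEx(&status)) {
        std::printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    const double MB = 1024.0 * 1024.0;
    std::printf("Total physical RAM:     %.0f MB\n", status.ullTotalPhys / MB);
    std::printf("Available physical RAM: %.0f MB\n", status.ullAvailPhys / MB);
    std::printf("Memory in use:          %lu%%\n", status.dwMemoryLoad);
    return 0;
}

Compile it with any Windows C++ compiler and compare its output to the Physical Memory section of Task Manager.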

 

Performance Monitor This is the old-school Windows geek's favorite tool. (One big advantage it has over the others is that you can save your settings and logged data for later review.) To run it, click Start, type perfmon, and press Enter. To use it, you must create a custom layout by adding "counters" that track resource usage over time. The number of available counters, broken into more than 100 separate categories, is enormous; in Windows 7 you can choose from more than 35 counters under the Memory heading alone, measuring things like Transition Pages RePurposed/sec. For this exercise, I configured Perfmon to show Committed Bytes and Available Bytes. The latter is the same as the Available figure in Task Manager. I'll discuss Committed Bytes in more detail later.
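
For the curious, the counters Perfmon draws on can also be sampled from code through the Performance Data Helper (PDH) API. The following minimal C++ sketch, included purely as an illustration, polls the two counters I used for this exercise:

#include <windows.h>
#include <pdh.h>
#include <cstdio>

// Minimal sketch: poll \Memory\Committed Bytes and \Memory\Available Bytes,
// the two counters used in this exercise, through the PDH API.
// Link with pdh.lib.
int main() {
    PDH_HQUERY query = nullptr;
    PDH_HCOUNTER committed = nullptr, available = nullptr;

    if (PdhOpenQueryW(nullptr, 0, &query) != ERROR_SUCCESS) return 1;
    // The "English" variants keep these paths working on localized systems.
    PdhAddEnglishCounterW(query, L"\\Memory\\Committed Bytes", 0, &committed);
    PdhAddEnglishCounterW(query, L"\\Memory\\Available Bytes", 0, &available);

    for (int i = 0; i < 5; ++i) {   // five samples, one second apart
        PdhCollectQueryData(query);
        PDH_FMT_COUNTERVALUE c = {}, a = {};
        PdhGetFormattedCounterValue(committed, PDH_FMT_LARGE, nullptr, &c);
        PdhGetFormattedCounterValue(available, PDH_FMT_LARGE, nullptr, &a);
        std::printf("Committed: %6lld MB   Available: %6lld MB\n",
                    c.largeValue / (1024 * 1024), a.largeValue / (1024 * 1024));
        Sleep(1000);
    }
    PdhCloseQuery(query);
    return 0;
}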

Resource Monitor The easy way to open this tool is by clicking the button at the bottom of the Performance tab in Task Manager. Resource Monitor was introduced in Windows Vista, but it has been completely overhauled for Windows 7 and displays an impressive amount of data, drawn from the exact same counters as Perfmon without requiring you to customize anything. The Memory tab shows how your memory is being used, with detailed information for each process and a colorful Physical Memory bar graph to show exactly what's happening with your memory. I believe this is by far the best tool for understanding at a glance where your memory is being used.
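
The per-process figures on that tab (working set, commit, hard faults and so on) can likewise be read in code. As a rough illustration only, and not a substitute for Resource Monitor, here's a minimal C++ sketch in which a process reports its own numbers via GetProcessMemoryInfo; PrivateUsage roughly corresponds to the Commit column:

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Minimal sketch: report, for the current process, the kind of per-process
// figures Resource Monitor's Memory tab lists. May need psapi.lib on older
// toolchains.
int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    pmc.cb = sizeof(pmc);
    if (!GetProcessMemoryInfo(GetCurrentProcess(),
                              reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                              sizeof(pmc))) {
        return 1;
    }
    std::printf("Working set:   %llu KB\n",
                static_cast<unsigned long long>(pmc.WorkingSetSize) / 1024);
    std::printf("Private bytes: %llu KB\n",
                static_cast<unsigned long long>(pmc.PrivateUsage) / 1024);
    std::printf("Page faults:   %lu\n", pmc.PageFaultCount);
    return 0;
}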

You can go through the entire gallery to see exactly how each tool works. I ran these tests on a local virtual machine, using 1 GB of RAM as a worst-case scenario. If you have more RAM than that, the basic principles will be the same, but you'll probably see more Available memory under normal usage scenarios. As you'll see in the gallery, I went from an idle system to one running a dozen or so processes, then added in some intensive file operations, a software installation, and some brand-new processes before shutting everything down and going back to an idle system.

Even on a system with only 1 GB of RAM, I found it difficult to exhaust all physical memory. At one point I had 13 browser tabs open, including one playing a long Flash video clip; at the same time I had opened a 1000-page PDF file in Acrobat Reader and a 30-page graphically intense document in Word 2010, plus Outlook 2010 downloading mail from my Exchange account, a few open Explorer windows, and a handful of background utilities running. And, of course, three memory monitoring tools. Even with that workload, I still had roughly 10% of physical RAM available.

So why do people get confused over memory usage? One of the biggest sources of confusion, in my experience, is the whole concept of virtual memory compared to physical memory. Windows organizes memory, physical and virtual, into pages. Each page is a fixed size (typically 4 KB on a Windows system). To make things more confusing, there's also a page file (sometimes referred to as a paging file). Many Windows users still think of this as a swap file, a bit of disk storage that is only called into play when you absolutely run out of physical RAM. In modern versions of Windows, that is no longer the case. The most important thing to realize is that physical memory and the page file added together equal the commit limit, which is the total amount of virtual memory that all processes can reserve and commit. You can learn more about virtual memory and page files by reading Mark Russinovich's excellent article Pushing the Limits of Windows: Virtual Memory.
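
To put some concrete numbers on those terms, here's a minimal C++ sketch that prints the page size and the commit limit on the machine it runs on. It's only an illustration; note that the ullTotalPageFile field, despite its name, reports the commit limit rather than the page file size by itself:

#include <windows.h>
#include <cstdio>

// Minimal sketch: print the page size and the commit limit. Despite its name,
// MEMORYSTATUSEX.ullTotalPageFile is the system commit limit (roughly
// physical RAM plus the current page file size), not the page file alone.
int main() {
    SYSTEM_INFO si = {};
    GetSystemInfo(&si);
    std::printf("Page size:    %lu bytes\n", si.dwPageSize);   // typically 4096

    MEMORYSTATUSEX ms = {};
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    const double MB = 1024.0 * 1024.0;
    std::printf("Physical RAM: %.0f MB\n", ms.ullTotalPhys / MB);
    std::printf("Commit limit: %.0f MB\n", ms.ullTotalPageFile / MB);
    return 0;
}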

As I was researching this post, I found a number of articles at Microsoft.com written around the time Windows 2000 and Windows XP were released. Many of them talk about using the Committed Bytes counter in Perfmon to keep an eye on memory usage. (In Windows 7, you can still do that, as I've done in the gallery here.) The trouble is, Committed Bytes has only the most casual relationship to actual usage of the physical memory in your PC. As Microsoft developer Brandon Paddock noted in his blog recently, the Committed Bytes counter represents:

The total amount of virtual memory which Windows has promised could be backed by either physical memory or the page file.

An important word there is “could.” Windows establishes a “commit limit” based on your available physical memory and page file size(s).  When a section of virtual memory is marked as “commit” – Windows counts it against that commit limit regardless of whether it’s actually being used

On a typical Windows 7 system, the amount of memory represented by the Committed Bytes counter is often well in excess of the actual installed RAM, but that shouldn't have an effect on performance. In the scenarios I demonstrate here, with roughly 1 GB of physical RAM available, the Committed Bytes counter never dropped below about 650 MB, even though physical RAM in use was as low as 283 MB at one point. And ironically, on the one occasion when Windows legitimately used almost all available physical RAM, using a little more than 950 MB of the 1023 MB available, the Committed Bytes counter remained at only 832 MB.
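
You can watch that disconnect happen yourself. The small C++ demonstration below (my own illustration, not something from the gallery) commits a 256 MB block, which makes Committed Bytes jump immediately; only when it later touches the pages does physical memory In Use actually climb. Run it with Perfmon and Resource Monitor open:

#include <windows.h>
#include <cstdio>

// Demonstration sketch: committing memory raises Committed Bytes right away,
// but physical RAM is only consumed as the pages are actually touched.
int main() {
    const SIZE_T size = 256u * 1024 * 1024;   // 256 MB

    // Reserve address space only; this charges neither RAM nor the commit limit.
    void* region = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!region) return 1;

    // Commit the region: Committed Bytes jumps by roughly 256 MB, yet the
    // working set (and physical memory In Use) barely moves.
    if (!VirtualAlloc(region, size, MEM_COMMIT, PAGE_READWRITE)) return 1;
    std::printf("Committed 256 MB. Check Committed Bytes, then press Enter.\n");
    std::getchar();

    // Touch every page: only now does the memory manager hand out physical
    // pages, and In Use climbs accordingly.
    volatile char* p = static_cast<volatile char*>(region);
    for (SIZE_T i = 0; i < size; i += 4096) p[i] = 1;
    std::printf("Touched every page. Check In Use, then press Enter to exit.\n");
    std::getchar();

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}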

So why is watching Committed Bytes important? You want to make sure that the amount of committed bytes never exceeds the commit limit. If that happens regularly, you need either a bigger page file, more physical memory, or both.
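
If you want to automate that comparison, the GetPerformanceInfo call reports both the current commit charge and the commit limit system-wide. A minimal C++ sketch of the check follows; the 90 percent warning threshold is my own arbitrary choice for illustration:

#include <windows.h>
#include <psapi.h>
#include <cstdio>

// Minimal sketch: compare system-wide committed memory against the commit
// limit. GetPerformanceInfo reports both values in pages. May need psapi.lib
// on older toolchains.
int main() {
    PERFORMANCE_INFORMATION pi = {};
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) return 1;

    const double MB = 1024.0 * 1024.0;
    const double committed = static_cast<double>(pi.CommitTotal) * pi.PageSize / MB;
    const double limit     = static_cast<double>(pi.CommitLimit) * pi.PageSize / MB;
    const double charge    = 100.0 * committed / limit;

    std::printf("Committed:     %.0f MB\n", committed);
    std::printf("Commit limit:  %.0f MB\n", limit);
    std::printf("Commit charge: %.1f%% of the limit\n", charge);

    // Arbitrary threshold, for illustration only: if the charge stays this
    // high for long stretches, a bigger page file or more RAM is in order.
    if (charge > 90.0)
        std::printf("Warning: commit charge is close to the commit limit.\n");
    return 0;
}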

Watching the color-coded Physical Memory bar graph on the Memory tab of Resource Monitor is by far the best way to see exactly what Windows 7 is up to at any given time. Here, from left to right, is what you'll see:

Hardware Reserved (gray) This is physical memory that is set aside by the BIOS and other hardware drivers (especially graphics adapters). This memory cannot be used for processes or system functions.

In Use (green) The memory shown here is in active use by the Windows kernel, by running processes, or by device drivers. This is the number that matters above all others. If you consistently find this green bar filling the entire length of the graph, you're trying to push your physical RAM beyond its capacity.

Modified (orange) This represents pages of memory that can be used by other programs but would have to be written to the page file before they can be reused.

Standby (blue) Windows 7 tries as hard as it can to keep this cache of memory as full as possible. In XP and earlier, the Standby list was basically a dumb first-in, first-out cache. Beginning with Windows Vista and continuing with Windows 7, the memory manager is much smarter about the Standby list, prioritizing every page on a scale of 0 to 7 and reusing low-priority pages ahead of high-priority ones. (Another Russinovich article, Inside the Windows Vista Kernel: Part 2, explains this well. Look for the "Memory Priorities" section.) If you start a new process that needs memory, the lowest-priority pages on this list are discarded and made available to the new process.

Free (light blue) As you'll see if you step through the entire gallery, Windows tries its very best to avoid leaving any memory at all free. If you find yourself with a big enough chunk of memory here, you can bet that Windows will do its best to fill it by copying data from the disk and adding the new pages to the Standby list, based primarily on its SuperFetch measurements. As Russinovich notes, this is done at a rate of a few pages per second with Very Low priority I/Os, so it shouldn't interfere with performance.

In short, Windows 7 (unlike XP and earlier Windows versions) goes by the philosophy that empty RAM is wasted RAM and tries to keep it as full as possible, without impacting performance.

Questions? Comments? Leave them in the Talkback section and I'll answer them in a follow-up post or two.

Topics: Software, Hardware, Microsoft, Operating Systems, Windows


Talkback

  • Are you going to pay for it??

    Where do I send the bill??
    wackoae
    • RE: Windows 7 memory usage: What's the best way to measure?

      @wackoae Performance is not the only problem; compatibility is also very important. But Windows 7 has some flaws regarding Internet issues. XP still rates A+ in comparison to Windows 7. The better approach is to use XP, but it needs to improve more.
      sirmac9
  • Dirty memory and memory priorities

    Windows Vista and 7 have a few more tricks up their sleeves. Some of these features have no equivalent in other OSes, which is why one should be careful about applying "common" knowledge from other OSes or from earlier versions. This is where Craig Barth (Randall C. Kennedy) failed.

    As many have mentioned, Windows Superfetch is not just a post-cache as exists in most/all modern OSes. Superfetch is a <i>pre-cache</i> which will use predictive heuristics to load applications and even data files into RAM <i>before</i> you use them for the first time. The heuristics are based on past behavior of the user, time of day, day of week and current activities.

    Windows also has <i>memory priorities</i>. Memory priorities influence which memory will be deallocated first and which memory pages have priority on physical memory. This is especially important for a desktop operating system, which must balance background workload (throughput) against foreground responsiveness.

    Most OSes use an LRU (least recently used) algorithm when swapping. This is generally a good choice when you value throughput, such as on a server. On a desktop OS with asymmetrical requirements, LRU has some weaknesses.

    Consider search indexing, disk defragmentation, Superfetch and other housekeeping tasks. These all demand memory to run. Even if they execute with low CPU priority and yield the CPU as soon as it is needed, you used to risk these processes conspiring against you when you went to lunch. When you came back, these processes had caused all memory allocated to foreground processes (Word, Outlook, browser, etc.) to be swapped to disk. The result was that practically every task you performed after lunch would require memory to be swapped in from disk. Not until you had massaged the system would it be fully responsive again.

    This is where memory priorities offer a unique solution on Windows Vista/7. The background processes allocate memory with a lower priority. Hence, the OS will *not* swap out foreground process memory (or will swap it back ASAP) when a process requires memory with a lower priority. Instead the OS will choose another equally low-priority memory page to swap out.

    <b>This last point is where Randall C. Kennedy completely failed</b>. He defended his claim that Windows 7 computers maxed out memory by referring to the number of page "faults", where memory required for a process needs to be swapped in from the page file. With the Vista/7 memory model you cannot just look at the number of page faults and go "OMG, it is thrashing!".

    Higher page fault rates may simply mean that background processes are being referred to the page file more often. You need to look at <i>why</i> memory page requests "fault" and which processes are hit by it. If it is the search indexer, it is perfectly OK. It certainly does NOT mean that memory is maxed out.
    honeymonster
    • Sorry, but a large number of page ins

      WILL cause performance degradation, regardless of the reason, and
      happens when there is insufficient physical memory to hold your
      processes.

      I don't care what beautiful algorithms Windows has in place to handle
      memory. The fact is, they come into play when you are starting to run out
      of physical memory.
      frgough
      • You're confused

        Page ins are meaningless in terms of performance if they're soft faults from pages that are already resident in physical memory (ie, the Standby List). They're only an issue if they're hard faults, which are served from disk or from the pagefile.

        And yes, they would have come into play had I run out of physical memory. If I had tried to run Photoshop and Premiere on this system I could have ground it to a halt. But doing mainstream business tasks I never, ever exhausted physical memory on a system with 1GB of RAM.

        Updated to add: In fact, some of those page-ins represent Windows actively filling the Standby List with code and data (at low I/O priority) based on SuperFetch settings. Those are counted as page ins but certainly have no effect on performance.
        Ed Bott
        • I didn't see anything in his post which showed he was confused.

          What he said was accurate. He just didn't differentiate between soft and hard page-ins. Given the context I would think hard page-ins would have been assumed.

          Likewise what you said is accurate too.
          ye
          • Explanation

            Direct quote from frgough: "a large number of page ins will cause performance degradation."

            No, that is not true. A large number of hard faults will cause performance degradation. If one confuses page ins with hard faults, then one draws the original conclusion.
            Ed Bott
          • Same thing.

            Given the context a page-in is the same as a hard fault. In Solaris a soft fault is called a minor page fault whereas a hard fault is called a major page fault. Different terminology for the same thing given the context.
            ye
          • Performance Counters, please

            In the context of this post, we need to be talking about WMI counters available through one or more of the three Windows tools I describe here. I presume this can be measured in Perfmon. Which counters are you talking about?
            Ed Bott
            • @ Ed Bott: The same ones you're referring to.

              I don't know how to be any clearer, Ed...the two of you are talking about the same thing using different words.
            ye
          • paging hard/soft

            The essential difference between a hard fault and a soft fault does not involve direction, but only work. A "faulted" page is overwritten with something before it is given to its requestor. In a soft fault, that page is either cleared (filled with 0's) or filled with data from somewhere else in, or very close to, main memory. In a hard fault, the page must pass through the I/O controller before the RAM can be modified. In or out, a hard fault includes a memory wait to clear the page prior to buffering the data, to guarantee that the only data present was written there by the process calling on it.

            None of these is related to whether the page was coming or going.
            gjsherr
          • Hard or soft, I didn't ever notice it in my experience

            The priority is lower than the active process. Tweakers, real tweakers, in XP would have adjusted this setting per process, for example in Task Manager by right-clicking on the Processes tab. But in Vista and 7 this is already preconfigured. It is even more effective as you use it on a regular basis; in 7 this feature works better and is able to better predict your moves. Who truly cares how the technology works inside? It just works as advertised.

            This article is misunderstood by the negative responses. Remember, the author used 1 GB to show how functional the OS is with the minimal RAM required by setup. It wasn't meant to suggest that you will have a high-performance beast using 1 GB. Its goal was solely to show how well Windows can manage that RAM. People need to stop putting words in the author's descriptions. All it was saying is that within the limits of similar XP requirements, Vista and 7 can do similar work with far better memory management than anything else MS has released, and do it more efficiently, even while wanting more memory for core components and services.
            Ez_Customs
      • This is incorrect.

        First of all, as Ed noted, most Page Faults do not generate I/O. Hard faults do, but they do NOT mean that physical memory has been exhausted or is being inefficiently used, as some have claimed.

        Keep in mind:

        1) Hard faults where SuperFetch is pre-loading code or data will benefit performance, not hurt it.

        2) Hard faults where SuperFetch is actively restoring pages which have been swapped to disk back into memory when it is made available (such as after exiting a game) will improve performance, not hurt it.

        3) Hard faults are generated when you run a program which wasn't in the cache, or a program delay-loads a DLL not in the cache. Not unexpected.

        4) Hard faults are generated when a program accesses a new page of a memory-mapped file. For example, the various Search Indexer processes use mapped views of the index files to improve the efficiency of the indexing process. But this triggers hard faults when that data hasn't been cached.

        5) It's probably a worthy trade-off to take a hard fault on a background process (virus scanner, indexer, etc) if it prevents a hard fault in the foreground process, modulo priority inversion concerns and untimely I/O (but background + cancellable I/O alleviates this).
        BrandonLive
      • I smell an XP FanBoy

        Coming from someone who is a fanboy of XP, I bet. XP has this problem; Vista and 7 don't. If it does happen, it's rare and you more than likely will not notice it unless you have a poor hardware (memory, specifically) configuration. Vista and 7 will perform the function you describe while in an idle state when possible; if it has to be done that second, the process runs at low priority. The result is seamless and mostly unnoticed! Better learn to shop for hardware, and don't buy it just because the price makes it look like top-of-the-line stuff!

        No, I am not trying to start an argument, but I have been on Vista since day 1, and Windows 7 shortly after the Vista release.
        Ez_Customs
  • Memory may still be scarce

    Virtualization is becoming more and more common. When you run multiple virtual machines on one physical machine, available memory often becomes the bottleneck that limits the number of VMs.

    Hence, responsible memory management is still a virtue. Perhaps not for the home/laptop/desktop user, but for enterprise users on virtual machines it *is* an issue which cannot always be solved by throwing more RAM into the machine.
    honeymonster
  • That is so wrong

    "On a typical Windows 7 system, the amount of memory represented by the Committed Bytes counter is often well in excess of the actual installed RAM, but that shouldn't have an effect on performance."

    Committed memory is the memory that is actually used by your processes. This virtual memory can be mapped either to physical memory or to swap. As long as you are using applications that have their memory mapped to physical memory, everything is swell and the application does not suffer a negative impact from the lack of physical memory. But applications whose data is swapped are stopped, as they cannot work.

    If you switch to an application that is swapped, the operating system will have to rearrange memory, loading into physical memory the working memory of the app you need and putting as much as it can into swap to make room.

    When you do this switch between apps, you will seriously feel a performance impact.

    Any app that has to perform background work will also be heavily impacted by a lack of physical memory, even if the system does not warn you about a lack of memory. You are only warned of a lack of virtual memory.
    s_souche
    • Not quite

      You assume that every process which allocates memory is using it all the time.

      If every application was constantly reading and writing all of its allocated memory, you would be right: They would be fighting for physical memory.

      But as it turns out, *most* applications actually allocate memory but do not access all of it all the time. Few applications do, actually.

      Consider Word loaded with a big/huge document with lots of images etc. What urgently needs to be in memory is actually only the part you are presently viewing and the immediate content to which you are likely to scroll. If you page up/down there is *ample* time to swap in memory from disk.

      Consider the search indexer. It is a background process which will spring into action when 1) it detects that changes may have been made and 2) your machine is idling. It may be preempted when you are using your computer, like scrolling in Word. Should it just throw away everything it had done until preempted, or should it just accept that its memory will be paged to disk? The latter is in fact *perfectly ok*, as the process itself is low priority.
      honeymonster
      • Depends on what you are doing

        If you have multiple applications running that have allocated more than your physical memory, and that are actually running and not sitting around, you can expect them to use the allocated memory. A Word document is using the memory it allocated. You are perhaps not using all the pages of your document at any given time, but if the doc is swapped and you start a search or jump to another page, you will feel the performance impact really hard, as you'll have to wait for Windows to finish swapping it back in.
        s_souche