For decades physical memory was incredibly costly. As a result many machines came with memory capacities that look laughable today. Not gigabytes or megabytes, but kilobytes. The smallest IBM 360 mainframe made do with as little as 8KB of main memory in the 1960s. Even in 1980, RAM cost over $1000 a megabyte. Today, it's half a penny.
Because memories were so tiny, every program had to be shoehorned into the available capacity. Programmers went to elaborate lengths -- using overlays -- to move program segments in and out of physical (RAM) memory.
But architects had a brilliant idea -- pioneered on the Manchester Atlas in the early 1960s and commercialized in the 1970s: extend physical memory by virtualizing it, using hard drive storage for the added capacity. The operating system would handle moving program segments into and out of physical memory, saving programmers untold grief.
And they did. IBM's MVS (Multiple Virtual Storage) for the System/370 series and DEC's VMS (Virtual Memory System), running on VAX (Virtual Address Extension) hardware, were two popular virtual memory systems. At the time, details such as the best page size and how to limit thrashing were hotly debated.
Over time, the best all-around designs were proven, optimized, and copied. Today virtual memory support is built into x86 processors -- and most other CPUs -- and into every mainstream operating system. It is so seamless and efficient that most people have no idea it exists.
Virtual memory basics
When you fire up an application, the operating system assigns it a virtual address space. For 64-bit apps on Windows 8.1 and later, that address space is a hefty 128TB, while macOS offers a ginormous 18 exabytes of addressable space for 64-bit processes. 32-bit apps are limited to at most 4GB of virtual memory -- and often less.
The operating system -- Windows, Linux, or macOS -- then manages the virtual-to-physical address translation and the swapping of active program segments into and out of physical RAM. Typically the segments (or pages) are 4KB or 8KB. The CPU's memory management unit (MMU) and translation lookaside buffer (TLB) give the OS hardware assistance in keeping track of millions or even billions of pages.
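The mechanics are simple to sketch: a virtual address splits into a page number and an offset within the page, and a page table maps page numbers to physical frames. Here's a toy illustration in Python, assuming 4KB pages and a made-up page table entry (real translation happens in hardware, with multi-level tables):

```python
PAGE_SIZE = 4096  # 4KB pages, common on x86

def split_address(vaddr: int):
    """Split a virtual address into a (page number, offset) pair."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0x12345: 0x42}

def translate(vaddr: int) -> int:
    """Translate a virtual address to a physical one via the page table."""
    page, offset = split_address(vaddr)
    frame = page_table[page]       # a miss here would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x12345678)))  # 0x42678
```

If the page isn't in the table (or its frame has been swapped out), the hardware raises a page fault and the OS fetches the page from disk -- which is where storage latency comes in.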
How does the OS know which pages to keep in physical memory? It tracks the least recently used pages for each process, and as demand for physical memory grows, it swaps the least-used pages out to free up RAM for more active ones.
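The least-recently-used idea can be sketched in a few lines. This is a toy model, not what any real kernel does (real systems use approximations like clock algorithms), but it shows the eviction logic:

```python
from collections import OrderedDict

class LRUPageSet:
    """Toy model of least-recently-used page replacement."""

    def __init__(self, frames: int):
        self.frames = frames            # physical page frames available
        self.resident = OrderedDict()   # resident pages, oldest access first

    def touch(self, page):
        """Access a page; return the page evicted to disk, if any."""
        evicted = None
        if page in self.resident:
            self.resident.move_to_end(page)  # now most recently used
        else:
            if len(self.resident) >= self.frames:
                # RAM is full: swap out the least recently used page.
                evicted, _ = self.resident.popitem(last=False)
            self.resident[page] = True       # "swap in" the new page
        return evicted

ram = LRUPageSet(frames=3)
for p in [1, 2, 3, 1, 4]:
    ram.touch(p)
# Touching page 4 evicts page 2 -- the least recently used --
# leaving pages 3, 1, 4 resident.
```

Every eviction means a write to disk, and every later touch of an evicted page means a read -- which is why swap-device latency matters so much.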
Naturally, the speed with which pages can be swapped has a huge impact on system performance. That's why advanced PCIe/NVMe drives -- such as those in the latest MacBook Pros -- are vital.
Storage latency is critical
With access times measured in microseconds, the time to swap a page drops from 6-10 milliseconds on a hard drive to, potentially (I haven't seen access times for the new MacBook Pro SSDs), 10 µsec -- roughly a thousand times faster. Since many PCIe/NVMe demos show access times in the 3-4 µsec range, the new Mac SSDs could be two to three thousand times faster than a hard drive.
Earlier Macs used SATA SSDs. Again, latency benchmarks are scarce, but other high-performance SATA SSDs have latencies in the 100-150 µsec range, making the new Mac SSDs potentially 10-50x faster. The bottom line: very fast storage makes virtual memory run faster and makes a heavily loaded system much more responsive.
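The back-of-the-envelope math is straightforward. Using representative figures from the ranges above (these are illustrative picks, not measured benchmarks):

```python
# Rough per-access swap latencies, in microseconds (illustrative values
# chosen from the ranges discussed above, not measured benchmarks).
hdd = 8_000      # hard drive: ~6-10 ms per access
sata_ssd = 125   # high-performance SATA SSD: ~100-150 µs
nvme_ssd = 10    # assumed PCIe/NVMe latency: ~10 µs

print(f"NVMe vs. hard drive: {hdd / nvme_ssd:.0f}x faster")
print(f"NVMe vs. SATA SSD:  {sata_ssd / nvme_ssd:.1f}x faster")
```

With these numbers a single page swap goes from "long enough to notice" to effectively invisible -- and under heavy swapping, that difference compounds across thousands of page faults per second.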
Like any system, a virtual memory system can be overloaded. If you rapidly switch between memory-intensive apps -- a video editor, Photoshop, a DAW, a video effects program, a compression tool -- your system may slow down as swap traffic overwhelms the I/O system. Pro tip: don't do that!
The Storage Bits take
From a cutting-edge feature in the '70s to omnipresent and forgotten in the '10s, virtual memory is the technology that lets your notebook or desktop work with data sets far larger than RAM. I commonly edit 250GB ProRes video files on my five-year-old 16GB MacBook Pro -- without maxing out RAM usage.
In my experience, most performance problems in pro apps are not virtual memory related -- assuming sufficient storage capacity -- but stem from inefficient code, insufficient CPU or graphics performance, or other bottlenecks in drivers or networks. In the future, with the advent of high-performance non-volatile RAM, we may be able to do away with virtual memory altogether, replacing it with multi-terabyte main memories that combine RAM and storage in a single address space.
But that's at least five years off and a story for another post.