Project Blade Runner: The personal computer of 2019

Summary: How far do you think we are from our 2019 vision of the personal computer?


An artist's rendering of a modular ARM-based Apple "Pi", circa 2019. (conceptual art courtesy Spidermonkey, Inc.)

With ZDNet having just celebrated its 20th anniversary in April of 2011 and published a number of retrospectives on how technology has evolved over the last twenty years, Scott Raymond and I decided it might make for an interesting thought experiment to project what personal computing might actually evolve into over the next decade.

Also Read: ZDNet 20th Anniversary Special Coverage

The business of gazing into a crystal ball or reading the tea leaves of the computer industry has always been difficult, particularly when trying to project more than two or three years out, unless you are an insider at companies directly working on actual technology roadmaps for semiconductors, system components, software and operating systems.

However, even for an insider, the gap between research and actual productization can lead to very different outcomes based on market acceptance, real manufacturing costs and any number of other complicating factors.

Still, there have been a number of recent advances in personal computing, the modern datacenter, consumer electronics and embedded systems that allow us to make educated guesses as to how these trends might manifest themselves as real products eight or ten years from now.

Certainly, I expect that we have almost definitely missed the mark on a number of details, will fail to anticipate some new technologies or trends, and may even be too optimistic about how quickly some of these ideas are adopted.

But whether we see this in 2019, 2021 or even 2025, I think that this reference architecture -- for which we have adopted the name "Blade Runner" in homage to Ridley Scott's 1982 vision of 2019 -- is a good pinhole view, an educated napkin drawing, of the personal computer of the future.

The Blade Runner foundation: “The Hub”

The foundation or basic building block of our personal computer of 2019 is what we refer to as "The Hub". We envision this as a flat, half-inch-thick square slab with approximately the same footprint as today's Apple Mac Mini -- about seven and a half inches (7.5", or roughly 19cm) on a side.

The Hub will contain all of the electronics of what we think of as today's PC motherboard, but far more integrated and miniaturized. The communications and memory bus, the CPU crossbar, the integrated controller electronics for all the I/O components, as well as the networking interfaces, will all be housed in this device.

The desktop PC "case" we know today, filled with wires and expansion boards, will no longer exist. Instead, snap-in modules analogous to Lego bricks will click into the Hub and perform the functions of the PC components we recognize today -- CPU, memory, graphics processing and storage.

Since virtually all of our components in the PC of 2019 are completely solid-state, relatively low power and generate only a minimal amount of heat, the Hub could sit flat on the desk pancake-style, or in a vertical position.

It could also reside in a living room as an advanced set-top box, connected to a large high-definition display. The actual physical design aesthetics would largely depend on the vendor selling and marketing the system.

The Hub itself is the fundamental building block of the Blade Runner architecture.

While the average PC might consist of a single Hub with a single CPU, memory and storage module, our architectural decisions would make it possible for multiple Hubs to be connected or stacked together to form much more powerful desktop computer systems, such as for creative content professionals, engineers or high-end gamers.

We also recognize that many consumers might purchase highly integrated systems where the Hub and modules are in a single, enclosed, non-upgradeable unit attached to some sort of display, or integrated into the display itself, such as with a tablet or laptop computer or even a fully integrated desktop device like “The Screen”.

Also Read: I've seen the future of computing, It's a Screen (2008)

Also Read: 2016, You're watching the Linux Channel (2008)

For example, some companies such as Apple might decide to adapt this architecture to their own uniquely desired form factors -- even exotic saucer-shaped, stackable systems like the "Pi" shown in our title artwork created by our conceptual artist.

But the electronics themselves and the basic building blocks in the Hub, regardless of actual end-user product configuration, would still be the same.

The important thing to remember about the Hub is that the systems architecture for desktops, laptops, smartphones and tablets would be identical. There would be no important distinction between desktop and embedded systems programming from a developer perspective. We’ll get into this when we discuss the actual operating system that runs on Blade Runner.


The Modules

With the trend in the consumer electronics industry toward completely integrated Systems on a Chip (SoC), we foresee that within eight to ten years, personal computing will adopt a systems architecture similar to that of today's smartphones and tablets, but much more scaled out than what we see in use today.

CPU and Memory

The typical central processing unit of 2019 will be an ARM-based architecture, running on a 32-way or 64-way SoC clocked at approximately 1GHz. Each of these will be a module that plugs into our Hub, which can accommodate multiple such modules, perhaps as many as four.

Also Read: Was Intel's x86 the "Gateway Drug" to Apple's ARM? (2010)

Also Read: Do we need to wipe the slate with x86? (2008)

We expect that by 2019, semiconductor manufacturing processes using methods similar to Intel's "Tri-Gate" at 14 nanometers or less will be fully mature.

While the SoC and memory (RAM) could theoretically be separated into distinct modules, forming a more monolithic systems architecture similar to what we use today, we thought it would be more interesting if each SoC were a self-contained CPU and memory "node", much the way compute nodes behave on today's scale-out supercomputers and blades behave in a typical datacenter blade chassis.

The only difference, of course, is that instead of blade servers housed in a large shared chassis on racks, we're talking about 1.5"-square "blocks" plugged into our Hub.

With current advances in memory density taken into account, we believe that each of these SoC/memory modules could accommodate as much as 64GB of RAM.

While this seems like a rather large amount of memory, and a very large number of cores, by today's PC standards, we anticipate there will be a need for highly parallelized applications, such as personalized deep analytics (a.k.a. "Watson on the Desk"), to take advantage of it.
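To make "taking advantage of it" concrete: the win comes from workloads that split cleanly across many cores. Here is a minimal Python sketch of that pattern; the keyword-scoring "analytics" kernel is purely our hypothetical stand-in, not a real application.

```python
from multiprocessing import Pool

def score_document(doc):
    # Hypothetical analytics kernel: count keyword hits in a document.
    return sum(doc.count(word) for word in ("risk", "trend", "anomaly"))

if __name__ == "__main__":
    docs = ["risk and trend data"] * 64       # stand-in corpus
    with Pool(processes=8) as pool:           # sized to the core count in practice
        scores = pool.map(score_document, docs)
    print(sum(scores))                        # prints 128
```

On a 32- or 64-core module, the same code simply runs with a larger pool; the point is that the work divides with no shared state between workers.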

We also recognize that in 2019 and later, while the industry will move towards an ARM-based systems architecture, there will almost certainly be requirements for continuing to run legacy x86-based applications.

For that reason, we have created a special CPU/RAM module which we are calling a “LAP”, or Legacy Applications Processor.

The LAP would sit on the cluster crossbar/unified bus and snap into the Hub like the other CPU/RAM modules, but instead of containing a multi-core ARM processor, it would contain a multi-core x86-compatible low-power CPU with onboard RAM, perhaps as much as 16GB.

Depending on how critical x86 binary compatibility continues to be, some vendors may actually choose to integrate the LAP into the Hub itself.

We foresee that the LAP would be roughly equivalent in processing power to a mid-range Intel Core i5 or AMD Phenom available to consumers on business-class desktops in the next two years. It would certainly be more than suitable for running business applications written for today's Windows 7 or the last x86-only version of Windows 8 or 9.

Storage and the Personal Cloud


While storage could certainly be enclosed in the same module as CPU and RAM, we have decided that for simplicity of the design and how our proposed Blade Runner functions that it probably makes sense for storage to be separated from CPU and main memory.

Just as the SoC modules will plug into the Hub, so will storage.

In 2019, we expect that a typical PC will have anywhere from 2 to 8 terabytes (TB) of NAND flash storage (SSD) in a single storage module, manufactured using 19nm or 12nm processes.

With the ability to plug multiple (perhaps as many as four) such modules into a single Blade Runner Hub, as much as 32TB of online storage per PC might not be an unusual configuration for a creative content professional or someone who keeps a lot of compressed MPEG-4 720p or 1080p video around.
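The 32TB figure is just the product of the slot count and the top of the per-module range; as a sanity check, under our assumed maximums:

```python
storage_modules = 4   # maximum storage modules per Hub (our assumption)
tb_per_module = 8     # high end of the projected 2-8TB range
print(storage_modules * tb_per_module)  # prints 32
```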

Storage on the Hub will be clustered and/or replicated based on end-user resiliency and business continuity requirements. We also anticipate that users will have a combination of localized storage on the PC itself and also on the Cloud.

However, when we say Cloud, we mean storage that is hosted on home premises on some sort of Internet-enabled NAS device (something like a PogoPlug on steroids), provided as Software or Storage as a Service from today's usual suspects (Google, Amazon, Microsoft, Apple, Rackspace, IBM, etc.), or housed in a rented space that we refer to as the "Capsule Datacenter".

What's a Capsule Datacenter? We imagine Mailboxes Etc. or FedEx-Kinkos-type businesses in your locality which, instead of renting out mailboxes as they do today, will rent out shoebox-sized rack space to private individuals and small businesses. There you could purchase or lease 32/64/128TB of centralized SSD, combined with an ARM-based low-power virtual or physical server based on the Blade Runner systems architecture, for a minimal monthly or yearly fee.

This server will be VPN-connected to your home PC via high-speed fiber broadband connectivity coming out of the Capsule Datacenter, and will communicate and synchronize with web services and data from your traditional SaaS cloud providers.

Why Capsule Datacenters as opposed to buying SaaS and storage from the usual suspects entirely? As we've seen with Amazon's storage and Elastic Compute Cloud failures in the last month, we expect that individuals and small businesses will want localized data replication in place in addition to SaaS offerings like the Amazon storage cloud or Google Apps.

This would not only be for peace of mind and for business and personal data-continuity concerns, but also because we think that locally-driven, medium-sized franchise businesses like "Capsule Datacenters R Us", attached to dark fiber, will be able to service home broadband customers and local businesses more efficiently, and provide higher levels of performance and superior customer service than regional Amazon or Google datacenters.


Graphics and Video

While we know that advances in miniaturization with SoCs will produce highly dense, very fast, low-power, low-heat multicore ARM-based systems, one of the things we struggled with was how to address the GPU.

As we all know, the Graphics Processing Units (GPUs) in today’s high-end workstations and gaming PCs produce large amounts of heat and draw considerable power.

While we envision that basic business graphics needs could be addressed by GPUs integrated into either each CPU/Memory node or into the basic Hub itself, we understand that there will be some users that will require more powerful multi-core GPUs that have special requirements.

For that reason, we believe that discrete GPUs will be needed as optional components in our conceptual design. We have accommodated for that by allowing the GPU module to stack on top of the CPU/memory modules via a high-speed universal bus connector and draw power from a separate external power supply.

Depending on end-user requirements and performance characteristics, discrete GPU modules could be cooled via fan assemblies or even liquid cooling systems.

Display technologies will certainly advance considerably in the next 8 to 10 years. Even now, typical PCs are shipping with 1080p displays, and some systems such as high-end Macs are shipping with higher-than-1080p screens, such as the Apple Cinema Display and the NEC PA301W, which have resolutions of 2560x1440 and 2560x1600 respectively.

While these higher-end displays are currently very expensive, we expect that by 2019, with economies of scale in LED display manufacturing, we will see commodity 25" full-aperture 4K-resolution monitors (4096x2304) at a $150-$200 price point shipping with typical end-user desktop PCs, and higher-than-1080p displays on laptop computers and full-size tablets.


Blade Runner: The Operating System and User Environment

We believe that by the year 2019, or at least two computing generations before this architecture is in place, virtualization will become the de rigueur method of deploying PC operating systems on end-user desktops.

Sometime before the ARM desktop transition, we expect that Windows, Linux, the Macintosh as well as smartphone and tablet-based ARM systems will use onboard firmware-based hypervisors to abstract the resources of the physical hardware from the OS, just as they are deployed in enterprises within datacenter environments today.

As such, we envision Blade Runner as a fully virtualized system.

Each compute/memory node plugged into the Hub is a fully autonomous node that boots off a master copy of a firmware-based hypervisor stored on the Hub. The nodes communicate over an internal high-speed mesh network/fabric that carries both IPv6 networking and I/O traffic to the datastores residing on the storage modules, which we envision as using a native clustered file system similar to VMFS-3, Oracle's ZFS or IBM's GPFS.

This is roughly analogous to how VMware vSphere clusters work in datacenters today, but scaled down to the size of a very small desktop computer.
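As a purely illustrative sketch of this scaled-down cluster idea -- every class, field and path below is our invention, not a real API -- the Hub-and-nodes relationship might be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ComputeNode:
    slot: int
    arch: str              # "arm64", or "x86" for a LAP module
    ram_gb: int
    booted: bool = False

    def boot(self, hypervisor_image: str) -> None:
        # Each node boots autonomously from the Hub's master hypervisor copy.
        self.booted = True

@dataclass
class Hub:
    nodes: list = field(default_factory=list)

    def power_on(self) -> int:
        # Hand every plugged-in node the same firmware image; return booted count.
        for node in self.nodes:
            node.boot("hub:/firmware/hypervisor.img")
        return sum(node.booted for node in self.nodes)

hub = Hub(nodes=[ComputeNode(slot=i, arch="arm64", ram_gb=64) for i in range(4)])
print(hub.power_on())  # prints 4
```

The design choice being illustrated: the Hub holds one master hypervisor image, and each node is independently bootable, so adding capacity is just appending another node to the list.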

We envision the Blade Runner cluster Type 1 hypervisor as being completely heterogeneous, in that it can run any number of ARM-based operating systems, much as VMware ESXi, XenServer or Oracle VM VirtualBox can run a mix of environments today.

Also Read: Android Virtualization, It's Time

Additionally, the Legacy Application Processor, if present, will be used to run older generations of virtualized Windows and Linux applications as required.

In 2019, we expect that several end-user platforms, including Windows, Mac and various flavors of Linux (including Android or some other next-generation Google OS), will be available for use as the "Primary" or "Shell" OS.

We will not try to predict which one of these will be the dominant player, as that would be a completely futile exercise.

However, we believe it is reasonable to expect that the Shell will boot as a virtual instance on the primary compute node, interacting with the hypervisor to spawn Apps on the primary and other compute nodes connected to the Hub.

Apps, such as web browsers, productivity applications and games, will all reside in clustered storage in virtual datastores as fully self-contained JeOS (Just Enough OS) instances.

In other words, the Apps themselves run in completely self-contained OSes on discrete virtual disk images, separate from the Shell and managed by the firmware hypervisor.

Unlike today, where Windows or even Linux applications may spread libraries all over the host OS and create what is referred to today as "DLL hell", causing numerous issues with application upgrades and patching, an App such as Microsoft Word or Excel might be distributed via a Microsoft App Store as a virtual disk image, running on its own stripped-down OS kernel such as "Midori".

A web browser such as Chrome might actually be deployed as a VM instance of Chrome OS, complete with Linux kernel.

Also Read: Browsers, The Next Generation

Even if the Shell OS was Macintosh or even Android or Ubuntu, it wouldn’t particularly matter to the end-user.

The benefit of modularizing the OS into discrete VMs for shell and apps is significant. Firstly, the Apps and the Shell aren't "installed" per se -- they are simply copied into clustered storage, where they are registered in a VM manifest, a database that contains the configuration information for each VM as well as customized user settings.
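A sketch of what such a VM manifest might look like; all field names, image URIs and kernel names here are hypothetical stand-ins for whatever a real vendor would use:

```python
# The manifest maps an App name to its VM configuration record.
manifest = {}

def register_app(name, image, kernel, settings=None):
    # "Installing" is just copying the disk image into clustered
    # storage and recording its configuration in the manifest.
    manifest[name] = {
        "image": image,
        "kernel": kernel,
        "settings": settings or {},
    }

register_app("Word", image="store://msft/word.vdi", kernel="midori")
register_app("Chrome", image="store://google/chromeos.vdi", kernel="linux")
print(sorted(manifest))  # prints ['Chrome', 'Word']
```

Uninstalling is the mirror image: delete the image and drop the manifest record, with no libraries scattered across a host OS to clean up.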

Access control lists and hardware-based Unified Threat Management (UTM) with Deep Packet Inspection (DPI), integrated tightly with the firmware hypervisor, will determine which applications are allowed to talk to each other and to regions of storage on the cluster and over the network, and will always be on guard for malware being downloaded or attempting to compromise a virtual machine.

Should an application VM somehow become infected by malware, it will be isolated to the application itself, and the hypervisor will be able to restore a “virgin” copy of the application from a stored VM template or via the App Store it was originally downloaded from.
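The restore step could then be as simple as overwriting the suspect image with the pristine template; a sketch, where the paths and the `.vdi` naming are our invention:

```python
import shutil
from pathlib import Path

def restore_from_template(app: str, templates: Path, datastore: Path) -> Path:
    # Discard the compromised VM image by overwriting it with a
    # pristine copy from the Hub's template store.
    src = templates / f"{app}.vdi"
    dst = datastore / f"{app}.vdi"
    shutil.copyfile(src, dst)
    return dst
```

Because the infection is confined to one disk image, recovery never touches the Shell OS or any other App.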

This "framework" for our 2019 personal computer should serve as a basic introduction. For more information, be sure to check out further posts from my colleague Scott Raymond, where he will delve into networking, wireless, peripherals and the consumer end-user experience of the Blade Runner architecture.

And be sure to listen in to our podcast as well.

How far do you think we are from our 2019 vision of the personal computer? Talk Back and Let Me Know.



Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.



Comments
  • RE: Project Blade Runner: The personal computer of 2019

    Interesting that your hub design is very similar to the desktop machine Convergent Technologies put out roughly 25 years ago. There they had a base box with the CPU and memory, and snapped in Lego-like pieces for disks and other peripherals.

    The more things change...
    • Another problem is that 2019 is too close to 2011; the technology develops

      @oldsysprog: ... much slower than it might seem when we think of the future.

      2003's PCs are not really that different from 2011 PCs conceptually.

      And besides tablets becoming the main idea of what a PC looks like for the average person (kids of the future will laugh at the thought that a computer was a thing a person had to go to in order to use it, since they will rarely see standalone machines/boxes), nothing much else will change.
      • RE: Project Blade Runner: The personal computer of 2019

        @denisrs "2003's PCs are not really that different from 2011 PC conceptually."

        1992 PCs aren't really that much different from 2011 systems.

        92: DOS 5.0, VGA graphics, 40MB to 120MB IDE HDDs. CD-ROM drives were available (remember caddies?). CRT displays up to 17 inches. Dial-up modems, BBS systems and AOL met the need for connectivity.

        2011: Win7, HD graphics (other than enhanced cooling needs, the cards look the same as a 92 version). IDE is nearly gone, replaced with SATA. HDD size is astronomical in comparison, but the drives externally look the same. DVD and BD burners are externally identical to CD drives in 92 machines, and internally are the same concept. Widescreen LCD displays are the most radical change, along with the ease of setting up multiple monitors on a single box. Oh, and wireless high-speed internet.

        With that said, you put a 92 white box next to a 2011 white box and the average person wouldn't be able to tell which one is better. A techie would really have to look at the back of each machine to get the truth.

        I'm predicting that 2019 systems aren't going to be radically different than a 2011 system, and that you'll see companies running 2011 boxes in 2019, with users hesitant to change from Win7/8 to Win10. :)
        • RE: Project Blade Runner: The personal computer of 2019

          Oh, and processing speed obviously has changed, but the biggest is that chips in 92 didn't require a cooler (which amazes me when I think about it). I touched a running 486/50 proc with a finger...and burned it pretty good!
      • RE: Project Blade Runner: The personal computer of 2019

        @sframberger - you glossed over several major changes, including the demise of the floppy disk (CD drives were wild and exotic in 1992; I worked in a college where we had a total of four PCs with CD drives by 1994) and the rise of USB as a universal connector. Mice were also somewhat exotic in 1992, as well as audio - few peripherals were present on the motherboard itself as well. The fact that your 2011 "white box" can be tiny and sitting on the desk or even on the back of a monitor is another overlooked fact, as well as the rise of multicore processors and parallelism, on-die cache and GPUs, etc. No, a lot has really changed, and that's not even considering the ecosystem around them - the switch from monochrome dot-matrix printers to the availability of cheap, full-color inkjet printers, the fall in price of scanners (which also cost a fortune in 1992 unless you were using a hand scanner), digital cameras, webcams, wireless peripherals, networked peripherals (printers, scanners, NAS, etc.), home RAID, etc.
      • RE: Project Blade Runner: The personal computer of 2019

        @DeRSSS You have it all wrong. Room-temp superconductors and quantum computing will be standard fare. Your personal computer will be a dental implant with more memory capacity than currently available to the NSA, and AI capabilities that anticipate your needs and desires, evaluate your environment, and help you make decisions in real time. Individual thoughts will become unique, and frowned upon by cultural icons set up by the Michelle Obama administration for your emulation. Happy Meals will still be illegal in San Francisco.
        • You might still get that Big Mac in NY but forget that Super Sized Coke

          I still like Arthur C. Clarke's "Brain Cap" concept. Grin.
    • Wrong Kemo Sabe!

      Solid-state devices will be too vulnerable to China and India's "EMP" attacks. We will be back to room-sized vacuum-tube 500kHz CPUs! LOL
  • I'll disagree with you on the ARM part

    as in 8 years too much will change. I can see a hybrid chip on the horizon which is neither x86 nor ARM, but a chip that can be both, or neither -- one that will "load" the instruction set and configuration needed to run whatever program you choose.
    Bill Pharaoh
    • RE: Project Blade Runner: The personal computer of 2019

      @Bill Pharaoh What you're referring to is a Field Programmable Gate Array. While I see this approach being used for specific ASICs, I don't see general-purpose CPUs doing this.
      • Yes, FPGAs are super slow, super power-hungry, super pricey

        @jperlow: agree.

        As to BR project, I would think that it is almost too conceptually good to be ready by 2019.
      • RE: Project Blade Runner: The personal computer of 2019

        @jperlow Or perhaps the failed Transmeta Crusoe chip?
    • Presumption of ARM is naive

      @Bill Pharaoh The writer's presumption that ARM will completely replace (except in the legacy processor) x86/x64 chips is naive - Intel is hardly going to let that happen - they're already bringing out new x86/x64-compatible chips later this year, using new designs that match/beat ARM power profiles
      • RE: Project Blade Runner: The personal computer of 2019

        @archangel999 The presumption that x86 is going to be a valid architecture 10 years from now -- 40 years after its introduction -- when compared with architectures that have no limitations imposed by legacy compatibility is also naive.

        What Intel has introduced with Tri-Gate is a semiconductor manufacturing process. It is not an architecture. It could be applied to ARM, IBM's POWER, or any number of other architectures including x86. You're still going to need more transistors to scale x86 relative to everything else, even if the transistors are more dense.
      • RE: Project Blade Runner: The personal computer of 2019

        @jperlow "It could be applied to ARM" - it could be applied to AMD's Phenom IIs as well, but Intel has no incentive to let either of those things happen. :-) Any change of standard architecture is going to come from pressure from outside Intel. The question is who would apply that pressure? Windows 8 being available on ARM is one of the things that needs to fall into place, but who's going to introduce desktop-level ARM CPUS and who's going to make desktop ARM motherboards? In the past, vendors faced the wrath of Intel for just using AMD CPUs or making AMD motherboards.
      • RE: Project Blade Runner: The personal computer of 2019

        "The presumption that x86 is going to be a valid architecture 10 years from now -- 40 years after its introduction"

        ARM was invented in 1983 . . . not really that much younger. You act as if it were invented in 2005.

        The only reason ARM doesn't have a major legacy problem is because it hasn't changed its instruction set in many years, and it isn't holding onto things like the x86's "real mode" which is virtually unused (but still exists, last I checked).

        FYI - do you remember why CISC won the CISC/RISC wars?

        There's an advantage to CISC, otherwise it wouldn't be used so much. I don't think we should ignore the reasons why we stuck with CISC in the first place.

        "It could be applied to ARM, IBM's POWER, or any number of other architectures including x86."

        I'll bet you good money Intel is grabbing as many patents as it can on the process right now.
  • Memristors and pattern recognition

    Two things I think you need to take into consideration:

    Memristor technology will be mainstream within 10 years, which allows a combination of memory and logic functions that uses no power for long-term storage. It should fundamentally change computer design like the shift from structured programming to object-oriented programming, though the change will be only partly underway by 2019.

    The other thing is pattern recognition. Intelligent biological entities are good at pattern recognition and digital circuitry isn't. That's basically because psychologists still haven't figured out the mechanism biological entities use. There's an excellent chance they will have that pretty much solved by 2019. Once we know the process, it won't be difficult to replicate it in synthetic diamond (which handles heat a lot better than silicon) chips and software. With software that can do pattern recognition almost as well as biological entities, voice recognition software, image search, and a lot of other applications will make today's bleeding-edge software look like a kid's toy by comparison.
    • RE: Project Blade Runner: The personal computer of 2019

      @Rick_R This is a back-of-the-envelope systems architecture, obviously. As to actual software applications, when you have that many cores, there's a lot of things you can do with parallelism, including things like pattern recognition.

      As to stuff like memristors, "racetrack memory", etc., we didn't want to project very experimental stuff into 2019/2021. We wanted to go with existing, proven technologies that would be refined.
  • RE: Project Blade Runner: The personal computer of 2019

    Personally, in 8 years, I see what most of us now think of as our desktop computers becoming personal tablets plus a variety of intelligent, wirelessly-connected peripherals scattered throughout our environment. There will be wireless mass storage modules, wireless keyboards, wireless printers, and wireless displays. It all exists now; it will simply interact more smoothly.

    The tablet we carry around will contain the processor. When we lay our tablet on our desktop, its touch screen will become an input device while moving display tasks to our larger desktop display automatically. Our tablet will automatically offer the wireless mass storage device sitting in the corner of our office as a place to save or load files. It will automatically add and remove printers as we move through the building so that our documents always print on the closest device.

    The main difference will be the intelligence added to peripherals to enable transparent interaction between our instant-on tablets and nearby devices. Nobody wants to deal with cumbersome OSes which have insane boot times any longer. Our instant-on tablets will gain power exponentially, doing more and more, until we realize we no longer need desktop PCs, just screens, keyboards, and off-device storage. I don't think even Microsoft realizes how much things are about to change in the next 8 years. Huge, sluggish operating systems that take forever to start are simply going to vanish. We've tasted "instant productivity" and we want more.
  • RE: Project Blade Runner: The personal computer of 2019

    What I'd like for 2019 probably won't be available then.

    The personal PC is glasses.

    Glasses that offer a virtual hi-res display overlaying the real world, are able to track my gestures, provide a virtual keyboard and offer surround sound. With all the usual accoutrements of a 2011 phone wirelessly linked to the net. A twitch of the finger gives me info on anything or anyone I'm looking at and my work and play can be at a time and location of my choosing.

    The future's so bright, I gotta wear shades ;-)