An artist's rendering of a modular ARM-based Apple "Pi", circa 2019. (conceptual art courtesy Spidermonkey, Inc.)
With ZDNet having just celebrated its 20th anniversary in April of 2011, and having published a number of retrospectives on how technology has evolved over the last twenty years, Scott Raymond and I decided it might make for an interesting thought experiment to project what personal computing might actually evolve into over the next decade.
Also Read: ZDNet 20th Anniversary Special Coverage
The business of trying to crystal-ball or read the tea leaves of the computer industry has always been difficult, particularly when projecting more than two or three years out, unless you are an insider at companies working directly on actual technology roadmaps for semiconductors, system components, software and operating systems.
However, even for an insider, the gap between research and actual productization can often lead to very different outcomes, based on market acceptance, real manufacturing costs and any number of other complicating factors.
Still, there have been a number of recent advances that we have observed in personal computing, the modern datacenter as well as in consumer electronics and embedded systems that allow us to make a number of educated guesses as to how these trends might actually manifest themselves as real products in the future, as much as eight or ten years from now.
- Project Blade Runner: The user experience of 2019
- Project Blade Runner: Putting it all together in 2019
Certainly, I expect that we have missed the mark on a number of details, will fail miserably to anticipate various new technologies or trends, or might even be too optimistic about how quickly some of these ideas will be adopted.
But whether we see this in 2019, 2021 or even 2025, I think that this reference architecture -- for which we have adopted the name "Blade Runner," in homage to Ridley Scott's 1982 vision of 2019 -- is a good pinhole view, an educated napkin drawing of the Personal Computer of the future.
The Blade Runner foundation: “The Hub”
The foundation or basic building block for our Personal Computer of 2019 is what we refer to as "The Hub". We envision this as a flat, half-inch-thick square slab with approximately the same footprint as today's Apple Mac Mini -- about seven and a half inches (7.5", or roughly 19cm) on a side.
The Hub will contain all of the electronics we think of as today's PC motherboard, but far more integrated and miniaturized. The communications and memory bus, the CPU crossbar, the integrated controller electronics for all the I/O components, as well as the networking interfaces, will all be housed in this device.
The desktop PC "case" we know today, filled with wires and expansion boards, will no longer exist. Instead, snap-in modules analogous to Lego bricks will click into the Hub, performing the functions of the PC components we recognize today -- CPU, memory, graphics processing and storage.
Since virtually all of the components in the PC of 2019 are completely solid-state, relatively low-power and generate only minimal heat, the Hub could sit flat on the desk pancake-style, or stand in a vertical position.
It could also reside in a living room as an advanced set-top box, connected to a large high-definition display. The actual physical design aesthetics would largely depend on the vendor selling and marketing the system.
The Hub itself is the fundamental building block of the Blade Runner architecture.
While the average PC might consist of a single Hub with a single CPU, memory and storage module, our architectural decisions would make it possible to connect or stack multiple Hubs together to form much more powerful desktop computer systems, such as for creative content professionals, engineers, or high-end gamers.
We also recognize that many consumers might purchase highly integrated systems where the Hub and modules are in a single, enclosed, non-upgradeable unit attached to some sort of display, or integrated into the display itself, such as with a tablet or laptop computer or even a fully integrated desktop device like “The Screen”.
Also Read: I've seen the future of computing, It's a Screen (2008)
Also Read: 2016, You're watching the Linux Channel (2008)
For example, some companies such as Apple might decide to adapt this architecture to their own uniquely desired form factors -- even exotic saucer-shaped, stackable systems like the "Pi" shown in our title artwork created by our conceptual artist.
But the electronics themselves and basic building blocks in the Hub, regardless of actual end-user product configuration would still be the same.
The important thing to remember about the Hub is that the systems architecture for desktops, laptops, smartphones and tablets would be identical. There would be no important distinction between desktop and embedded systems programming from a developer perspective. We’ll get into this when we discuss the actual operating system that runs on Blade Runner.
With the trend in the consumer electronics industry toward completely integrated Systems on a Chip (SoC), we foresee that within eight to ten years, personal computing will adopt a systems architecture similar to that of today's smartphones and tablets. However, it will be much more scaled out than what we see in use today.
CPU and Memory
The typical central processing unit of 2019 will be an ARM-based architecture: a 32-way or 64-way SoC clocked at approximately 1GHz. Each of these will be a module that plugs into our Hub, which can accommodate multiple such modules -- perhaps as many as four.
Also Read: Was Intel's x86 the "Gateway Drug" to Apple's ARM? (2010)
Also Read: Do we need to wipe the slate with x86? (2008)
We expect that by 2019, semiconductor manufacturing processes using methods similar to Intel's "Tri-Gate" at 14 nanometers or less will be fully mature.
While the SoC and memory (RAM) could theoretically be split into distinctly separate modules, forming a more monolithic systems architecture similar to what we use today, we thought it would be more interesting if each SoC were a self-contained CPU and memory "node" -- much the way compute nodes behave on today's scale-out supercomputers, or blades in a typical datacenter blade chassis.
The only difference of course is that instead of blade servers being housed in a large shared chassis on racks, we’re talking about 1.5” square “blocks” that are plugged into our Hub.
With current advances in memory density taken into account, we believe that each of these SoC/Memory modules could accommodate as much as 64GB of RAM each.
While this seems like a rather large amount of memory, and a very large number of cores by today's PC standards, we anticipate a need for highly parallelized applications such as personalized deep analytics (a.k.a. "Watson on the Desk") to take advantage of it.
We also recognize that in 2019 and later, while the industry will move towards an ARM-based systems architecture, there will almost certainly be requirements for continuing to run legacy x86-based applications.
For that reason, we have created a special CPU/RAM module which we are calling a “LAP”, or Legacy Applications Processor.
The LAP would sit on the cluster crossbar/unified bus and snap into the Hub like the other CPU/RAM modules, but instead of a multi-core ARM processor, it would contain a multi-core x86-compatible low-power CPU with onboard RAM -- perhaps as much as 16GB.
Depending on how critical x86 binary compatibility continues to be, some vendors may actually choose to integrate the LAP into the Hub itself.
We foresee that the LAP would be roughly equivalent in processing power to the mid-range Intel Core i5 or AMD Phenom chips that will be available on business-class desktops over the next two years. It would certainly be more than suitable for running business applications written for today's Windows 7, or for the last x86-only version of Windows 8 or 9.
While storage could certainly be enclosed in the same module as the CPU and RAM, we have decided that, for simplicity of design and of how our proposed Blade Runner functions, it makes more sense for storage to be separated from the CPU and main memory.
Just as the SoC modules will plug into the Hub, so will storage.
In 2019, we expect that a typical PC will have somewhere between 2 and 8 terabytes (TB) of NAND flash storage (SSD) in a single storage module, manufactured on 19nm or 12nm processes.
With the ability to have multiple (perhaps as many as four) such modules plugged into a single Blade Runner Hub, as much as 32 TB of online storage per PC might not be an unusual configuration for a creative content professional or someone who keeps a lot of compressed MPEG-4 720p or 1080p video around.
Storage on the Hub will be clustered and/or replicated based on end-user resiliency and business continuity requirements. We also anticipate that users will have a combination of localized storage on the PC itself and also on the Cloud.
However, when we say Cloud, we mean storage that is hosted on home premises on some sort of Internet-enabled NAS device (something like a PogoPlug on steroids), storage provided as Software or Storage as a Service from today's usual suspects (Google, Amazon, Microsoft, Apple, Rackspace, IBM, etc.), or storage in a rented space that we refer to as the "Capsule Datacenter".
What's a Capsule Datacenter? We imagine Mailboxes Etc. or FedEx-Kinko's-type businesses in your locality which, instead of renting out mailboxes as they do today, will rent out shoebox-sized rack space to private individuals and small businesses. There, you could purchase or lease 32/64/128TB of centralized SSD, combined with an ARM-based low-power virtual or physical server built on the Blade Runner systems architecture, for a minimal monthly or yearly fee.
This server will be VPN connected to your home PC via high-speed, fiber broadband connectivity coming out of the Capsule Datacenter, and will communicate and synchronize with web services and data from your traditional SAAS cloud providers.
Why Capsule Datacenters, as opposed to buying SaaS and storage entirely from the usual suspects? As we've seen with the Amazon storage and elastic compute cloud failures of the last month, we expect that individuals and small businesses will want localized data replication in place in addition to SaaS offerings like the Amazon Storage Cloud or Google Apps.
This would not only be for pure peace of mind and for business and personal data continuity. We also think that locally driven, medium-sized franchise businesses like "Capsule Datacenters R Us", attached to dark fiber, will be able to service home broadband customers and local businesses more efficiently and easily, providing higher levels of performance and superior customer service than regional Amazon or Google datacenters.
Graphics and Video
While we know that advances in miniaturization with SoCs will produce highly-dense, very fast, low-power, low-heat multicore ARM-based systems, one of the things which we struggled with was how to address things like GPUs.
As we all know, the Graphics Processing Units (GPUs) in today’s high-end workstations and gaming PCs produce large amounts of heat and draw considerable power.
While we envision that basic business graphics needs could be addressed by GPUs integrated into either each CPU/Memory node or into the basic Hub itself, we understand that there will be some users that will require more powerful multi-core GPUs that have special requirements.
For that reason, we believe that discrete GPUs will be needed as optional components in our conceptual design. We have accommodated for that by allowing the GPU module to stack on top of the CPU/Memory modules via a high-speed universal bus connector, drawing power from a separate external power supply.
Depending on end-user requirements and performance characteristics, discrete GPU modules could be cooled via fan assemblies or even liquid cooling systems.
Display technologies will certainly advance considerably in the next 8 to 10 years. Even now, typical PCs are shipping with 1080p displays, and some systems such as high-end Macs ship with higher-than-1080p screens -- the Apple Cinema Display and the NEC PA301W, at 2560x1440 and 2560x1600 resolution respectively.
While these higher-end displays are currently very expensive, we expect that by 2019, with economies of scale in LED display manufacturing, commodity 25" 4K-resolution monitors (4096x2304) will ship with typical end-user desktop PCs at a $150-$200 price point, with greater-than-1080p displays on laptop computers and full-size tablets.
Blade Runner: The Operating System and User Environment
We believe that by the year 2019 -- or at least two computing generations before this architecture is in place -- virtualization will have become the de rigueur method of deploying PC operating systems on end-user desktops.
Sometime before the ARM desktop transition, we expect that Windows, Linux, the Macintosh as well as smartphone and tablet-based ARM systems will use onboard firmware-based hypervisors to abstract the resources of the physical hardware from the OS, just as they are deployed in enterprises within datacenter environments today.
As such, we envision Blade Runner as a fully virtualized system.
Each compute/memory node plugged into the Hub is a fully autonomous node that boots off a master copy of a firmware-based hypervisor stored on the Hub. The nodes communicate over an internal high-speed mesh network/internal fabric that carries both IPv6 networking and I/O traffic to the datastores residing on the storage modules, which we envision using a native clustered file system similar to VMware's VMFS-3, Oracle's ZFS, or IBM's GPFS.
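To make the boot sequence concrete, here is a minimal Python sketch of the idea: each node pulls the master hypervisor image from the Hub and joins the internal fabric. Every class and name here is our own illustrative invention, not a real firmware interface.

```python
class Hub:
    """Holds the master hypervisor image and tracks nodes on the fabric."""
    def __init__(self, hypervisor_image):
        self.hypervisor_image = hypervisor_image
        self.fabric = []        # the internal high-speed mesh network

    def register(self, node):
        self.fabric.append(node)


class ComputeNode:
    """A self-contained SoC + RAM module plugged into the Hub."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.hypervisor = None

    def boot(self, hub):
        # 1. Load the master copy of the firmware hypervisor from the Hub.
        self.hypervisor = hub.hypervisor_image
        # 2. Join the internal fabric carrying IPv6 and storage I/O traffic.
        hub.register(self)


hub = Hub(hypervisor_image="firmware-hypervisor-v1")
for i in range(4):              # a fully populated Hub: four modules
    ComputeNode(i).boot(hub)
print(len(hub.fabric))          # prints 4
```

The point of the sketch is that there is no "master" node: every module boots the same image and is a peer on the fabric.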
This is roughly analogous to how VMware vSphere clusters work in datacenters today, but scaled down to the size of a very small desktop computer.
We envision the Blade Runner cluster Type 1 hypervisor as being completely heterogeneous, in that it can run any number of ARM-based operating systems, much as VMware ESXi, XenServer or Oracle xVM VirtualBox can run a mix of environments today.
Also Read: Android Virtualization, It's Time
Additionally, the Legacy Applications Processor, if present, will be used to run older generations of virtualized Windows and Linux applications as required.
In 2019, we expect that several end-user platforms, including Windows, Mac and various flavors of Linux (Including Android or some other next-generation Google OS) will be available for use as the “Primary” or “Shell” OS.
We will not try to predict which one of these will be the dominant player, as that would be a completely futile exercise.
However, we believe it will be reasonable to expect that the Shell will boot as a virtual instance on the primary compute node, which will interact with the hypervisor in spawning Apps on the primary and other compute nodes connected to the Hub.
Apps -- web browsers, productivity applications and games -- will all reside in clustered storage in virtual datastores, as fully self-contained JeOS (Just enough OS) instances.
In other words, the Apps themselves run in completely self-contained OSes on discrete virtual disk images, separate from the Shell and managed by the firmware hypervisor.
Unlike today, where Windows or even Linux applications may scatter libraries all over the host OS and create what is referred to today as "DLL hell", causing numerous issues with application upgrades and patching, an App such as Microsoft Word or Excel might be distributed via a Microsoft App Store as a virtual disk image, running on its own stripped-down OS kernel such as "Midori".
A web browser such as Chrome might actually be deployed as a VM instance of Chrome OS, complete with Linux kernel.
Also Read: Browsers, The Next Generation
Even if the Shell OS was Macintosh or even Android or Ubuntu, it wouldn’t particularly matter to the end-user.
The benefit of modularizing the OS into discrete VMs for shell and apps is significant. First, the Apps and the Shell aren't "installed" per se -- they are simply copied into clustered storage, where they are registered in a VM manifest: a database that contains the configuration information for each VM as well as customized user settings.
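A toy Python sketch of that "copy, don't install" model follows. The manifest schema, path layout and `JeOS` kernel name are all hypothetical assumptions on our part, meant only to illustrate the shape of the registration step.

```python
manifest = {}            # the VM manifest "database"
datastore = {}           # stands in for the clustered file system

def register_app(name, disk_image, settings):
    """Copy an App's virtual disk image into storage and record it."""
    path = "/datastore/apps/%s.img" % name
    datastore[path] = disk_image      # a copy, not an install: no shared DLLs
    manifest[name] = {
        "disk_image": path,
        "kernel": "JeOS",             # each App carries its own minimal OS
        "user_settings": settings,
    }

register_app("word", b"word-disk-image", {"autosave": True})
register_app("chrome", b"chromeos-disk-image", {"sync": False})
print(sorted(manifest))               # prints ['chrome', 'word']
```

Removing an App is equally clean under this model: delete the image and the manifest row, and nothing else on the system is touched.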
Access control lists and hardware-based Unified Threat Management (UTM) with Deep Packet Inspection (DPI), integrated tightly with the firmware hypervisor, will determine which applications are allowed to talk to each other, to regions of storage on the cluster, and over the network -- and will always be on guard against malware being downloaded or attempting to compromise a Virtual Machine.
Should an application VM somehow become infected by malware, the infection will be isolated to the application itself, and the hypervisor will be able to restore a "virgin" copy of the application from a stored VM template or from the App Store it was originally downloaded from.
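The two security ideas above -- hypervisor-enforced flow control and restore-from-template -- can be sketched in a few lines of Python. The flow table and the notion of a per-App template shown here are illustrative assumptions, not a real UTM or hypervisor API.

```python
allowed_flows = {("chrome", "word")}     # ACL: which App VMs may exchange data

def may_communicate(src, dst):
    """Hypervisor-enforced check before any inter-App traffic flows."""
    return (src, dst) in allowed_flows

templates = {"word": b"virgin-word-image"}   # pristine stored VM templates
running = {"word": b"virgin-word-image"}     # disk images currently in use

def quarantine_and_restore(app):
    # The compromise never leaves the App's own VM; recovery is a rollback
    # to the virgin template rather than a host-wide malware cleanup.
    running[app] = templates[app]

running["word"] = b"infected-image"          # malware tampers with the App
quarantine_and_restore("word")
print(running["word"] == templates["word"])  # prints True
```

The design choice worth noting is that recovery here is a whole-image rollback, which is only possible because each App is its own self-contained VM.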
This “framework” for our 2019 personal computer should serve as a base introduction. For more information, be sure to check out further posts from my colleague Scott Raymond, where he will delve into networking, wireless, peripherals and the consumer end-user experience of the Blade Runner architecture.
And be sure to listen in to our podcast as well.
How far do you think we are from our 2019 vision of the personal computer? Talk Back and Let Me Know.