Your next laptop may be designed by AI

With its Max Q technologies, Nvidia is collaborating with its leading rivals to apply AI that optimizes component placement and the routing of power and heat between CPU and GPU.
Written by Ross Rubin, Contributor

Following CES, I wrote about how the major PC chip companies had presented ways for CPUs and discrete GPUs to coordinate operations and hand off to each other, resulting in higher performance and better power efficiency. This applied regardless of whether the company had a dominant position in discrete GPUs (Nvidia), a long history in both CPUs and discrete GPUs (AMD), or was ramping up a discrete GPU business (like Intel, which also showed off collaboration between its discrete and integrated GPUs in its CES keynote).

Nvidia's latest work in this area is noteworthy not only because the company has to collaborate with competitors to achieve its goals, but also because the work is part of a broader set of initiatives called Max Q, a "capstone suite of technologies focused on laptops" that now uses AI to optimize system design. While much has been written about how AI is becoming more adept at developing software (such as the recent progress of DeepMind's AlphaCode), there's been less focus on the use of AI to optimize computing architectures, where thorny problems remain.

Max Q first focused on gaming laptops, a category that has grown from a few hundred thousand units per year a decade ago to now well over 20 million units per year, dwarfing sales of even popular dedicated consoles such as the PlayStation. In that time, we've seen the typical thickness of a gaming-focused laptop shrink from 30 mm or more to 20 mm or less. And even as many now weigh in at around the five-pound mark, they can still deliver top-end gaming experiences, offering 1440p resolution at high frame rates.

Also: Best gaming laptops: Top rigs for on-the-go gaming

When I wrote about Apple's M1 Pro and M1 Max chips last October, I noted how the limited game development for the Mac left it to compete with Windows PCs for pro design-focused apps but not for polygon-crunching top-tier games. Back in the Windows world, though, the greater power efficiency we have seen in such gaming-focused Windows laptops has affected PC vendors' design and marketing strategies. Notably, the progress in reducing their thickness and weight has led to toning down aggressive and imposing styling in favor of a more crossover approach.

This has been particularly evident in brands such as HP's Omen and Victus and Lenovo's Legion (even as brands such as Dell's Alienware and Acer's Predator continue to court hardcore gamers with their distinctive brand images). And while gamers may line up on either side of that aesthetic debate, few would refuse the longer play times enabled by improvements in performance per watt. While much of that gain comes from improvements in process technology, it can be dramatically furthered through system architecture optimization.

That's where AI comes in. According to Mark Aevermann, director of product management and marketing at Nvidia, the first version of Max Q's Dynamic Boost, in which power and heat are dynamically shifted between CPU and GPU to improve system efficiency, was managed by code that reflected human engineers' predictions of how best to balance those loads. However, once Nvidia added AI-focused Tensor cores to its GPU architecture, it realized that AI could do a much better job of weighing what can be hundreds of inputs in real time to determine where to shift system power for the best possible efficiency. And unlike static human-developed algorithms, the AI gets better over time at that thermal arbitrage. The same goes for new CPU architectures such as Intel's recently announced Alder Lake. In fact, Nvidia says it worked closely with Intel so that Nvidia's GPUs can manage both the performance and efficiency cores of Intel's 12th-gen Core architecture.
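To make the idea concrete, here is a deliberately simplified sketch of the kind of static, human-written heuristic that Dynamic Boost started with: dividing a shared power budget between the CPU and GPU based on load. Everything here is hypothetical for illustration; the real system is not public and weighs hundreds of telemetry inputs, while this toy uses just two utilization figures and made-up wattage numbers.

```python
# Hypothetical illustration only -- not Nvidia's actual Dynamic Boost logic.
# A fixed heuristic splits a shared power budget between CPU and GPU,
# handing spare headroom to whichever processor is under more load.

def split_power_budget(cpu_util: float, gpu_util: float,
                       total_watts: float = 115.0,
                       cpu_floor: float = 15.0,
                       gpu_floor: float = 35.0) -> tuple[float, float]:
    """Return (cpu_watts, gpu_watts) for a shared power budget.

    Each chip keeps a minimum allocation (its "floor"); the remaining
    headroom is divided in proportion to current utilization.
    """
    headroom = total_watts - cpu_floor - gpu_floor
    demand = (cpu_util + gpu_util) or 1.0  # avoid dividing by zero at idle
    cpu_share = cpu_floor + headroom * (cpu_util / demand)
    gpu_share = gpu_floor + headroom * (gpu_util / demand)
    return round(cpu_share, 1), round(gpu_share, 1)

# A GPU-bound game scene: most of the headroom flows to the GPU.
print(split_power_budget(cpu_util=0.2, gpu_util=0.8))
```

The point of the contrast in the paragraph above is that a rule like this is frozen at design time, whereas an AI model trained on real telemetry can keep refining how it weighs those inputs.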

This leads to another question: Why would AMD and Intel pursue such collaboration given that they both compete with Nvidia? While both of those CPU companies would love to see their own GPU businesses expand, Nvidia has such a strong discrete GPU market presence today that, were the other companies to turn their backs, the result would be worse performance in the important gaming and creative workstation markets, ceding an advantage to their main competitor or, even worse, Apple. In other words, AMD doesn't want to lose CPU business to Intel (and vice versa) because its chips don't work as well with Nvidia's GPUs.

A similar dynamic exists among the major laptop OEMs. Here, Nvidia has worked closely with its customers to develop thermally efficient system designs that have helped with factors such as acoustics, temperature against the skin, and even optimized placement of non-graphics components such as the Wi-Fi antenna. PC vendors don't want to cede the advantage of that expertise to other companies adopting the same chips. As Nvidia continues to push forward on future generations of its GPUs, it's clear that it will continue to work with customers and even competitors to ensure that the improvements show up in a laptop's usage experience and not just its spec sheet.
