One of the inconvenient truths about cloud migration is the migration part. Unless you are starting a new business or a new application with new data that resides in the cloud, you have to move the stuff from your data center to the cloud. Sure, there's a lot more bandwidth available across global fiber backbones today, but those backbones are built for data that is already in motion. It just wouldn't be cost-efficient to size them to transmit petabytes between two points because... well... migrating years of accumulated data from on-premises to the cloud is not an everyday occurrence.
It's often described as the speed-of-light problem. If you have a 10 Gbps connection, a petabyte of data sitting in your data center will take roughly 12 days to make it to the cloud. That's why early cloud migrations involving multiple terabytes of data and up typically used Sneakernet in the sky: load a pile of data onto a secure disk (or tape) and ship it off to your favorite cloud vendor. It's the way Azure does it today, and before Amazon introduced Snowball a couple of years ago, that's how you ferried data to AWS.
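The back-of-the-envelope math behind that figure is worth making explicit. A quick sketch (assuming a decimal petabyte of 10^15 bytes; the effective-utilization figure is an illustrative assumption, since real links rarely sustain full line rate):

```python
def transfer_days(size_bytes: float, link_bps: float, utilization: float = 1.0) -> float:
    """Days to move size_bytes over a link of link_bps bits/sec
    at the given fraction of line rate."""
    bits = size_bytes * 8
    seconds = bits / (link_bps * utilization)
    return seconds / 86_400  # seconds per day

PETABYTE = 1e15       # decimal petabyte, in bytes
LINK = 10e9           # 10 Gbps

ideal = transfer_days(PETABYTE, LINK)            # ~9.3 days at full line rate
realistic = transfer_days(PETABYTE, LINK, 0.75)  # ~12 days at 75% utilization
print(f"ideal: {ideal:.1f} days, realistic: {realistic:.1f} days")
```

At a perfectly utilized 10 Gbps the theoretical floor is about 9.3 days; once protocol overhead and shared bandwidth shave the effective rate to something like three-quarters of line speed, you land in the 12-day range the industry commonly cites.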
Now that Google is getting serious about pursuing the enterprise cloud business, it too is getting into the migration appliance space. The logistics are similar: you go online and order the device, it's shipped to you for a set period of time, and then you send it back. But, not surprisingly, the designs are different.
Google sees petabyte-size migrations as the sweet spot of the market. It's introducing two models sized for 100 TBytes and 480 TBytes of data, respectively. By comparison, Amazon's units sit at the lower end of the spectrum, at 50, 80, and 100 TBytes. Of course, the big exception is Amazon Snowmobile, if you want to truck 100 PBytes on a 12-wheeler with a 45-foot container.
Google's pricing strategy is built on that petabyte sweet-spot assumption, which equates to roughly two of its larger units. So while pricing for the smaller 100-TByte unit is on par with Snowball, at a petabyte, Google is listing the appliance at about 35% below Amazon's equivalent.
But it's not just size that matters; form and function differ too. While Amazon's units are self-standing, Google's are meant to be plugged into a rack. On the other hand, Amazon is using Snowball as the start of a new family that supports edge computing: Greengrass, for managing IoT applications. Bundled with local Lambda processing capability, Greengrass was designed on the assumption that even though Amazon is a cloud provider, some use cases, such as IoT, will require local compute at the edge. Google is not foreclosing such options, but unlike Amazon, it would need a device with a different architecture and form factor to deliver them.