Amazon Web Services' (AWS) cloud storage platform S3, or Simple Storage Service, today stores over 100 trillion objects.
AWS's Jeff Barr revealed the figure to mark S3's 15-year anniversary. AWS launched S3 publicly on March 14, 2006, four years after Amazon launched Amazon.com Web Services, although that early offering was far from the cloud infrastructure service AWS is today.
S3 was AWS's first generally available service, promising developers cheap storage billed by the amount stored per month. Five months later, AWS launched Elastic Compute Cloud (EC2), offering developers compute resources as well.
Barr recalls that S3's API started with a simple design. "Create a bucket, list all buckets, put an object, get an object, and put an access control list," notes Barr.
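Those five original operations map cleanly onto S3's REST API, where each call is a plain HTTP verb on a bucket or key path. The sketch below is illustrative only (authentication headers and the endpoint host are omitted, and the bucket/key placeholders are hypothetical, not from the article):

```python
# A minimal sketch of how S3's five original operations correspond to
# HTTP verbs and paths in its REST API (path-style addressing).
# "{bucket}" and "{key}" are placeholders; auth/signing is omitted.
ORIGINAL_S3_OPERATIONS = {
    "create a bucket":           ("PUT", "/{bucket}"),
    "list all buckets":          ("GET", "/"),
    "put an object":             ("PUT", "/{bucket}/{key}"),
    "get an object":             ("GET", "/{bucket}/{key}"),
    "put an access control list": ("PUT", "/{bucket}/{key}?acl"),
}

for name, (verb, path) in ORIGINAL_S3_OPERATIONS.items():
    print(f"{name}: {verb} {path}")
```

The "?acl" suffix is a subresource qualifier: the same PUT verb on the same key addresses the object's access control list rather than its body, which is how the original API stayed so small.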
Barr also says S3 is designed to provide "eleven 9's of durability", meaning an object stored in S3 has a durability of 99.999999999%.
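To put that figure in perspective, a quick back-of-the-envelope calculation (the ten-million-object count below is a hypothetical illustration, not from the article) shows how small the expected annual loss is:

```python
# Eleven 9's of durability: what it implies for expected object loss.
durability = 0.99999999999          # 99.999999999%
annual_loss_prob = 1 - durability   # ~1e-11 per object per year

objects = 10_000_000                # hypothetical: ten million stored objects
expected_losses_per_year = objects * annual_loss_prob  # ~0.0001

# In other words, roughly one object lost every ~10,000 years.
years_per_single_loss = 1 / expected_losses_per_year
print(f"Expected losses/year: {expected_losses_per_year:.6f}")
print(f"Years per single expected loss: {years_per_single_loss:,.0f}")
```

At that design point, even a customer storing ten million objects would on average expect to wait on the order of ten thousand years before losing a single one.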
In the 15 years since S3's launch, AWS has introduced a host of new services, such as S3 Glacier Deep Archive, a store for large volumes of infrequently accessed data, various data replication services, security features, and its Snowmobile shipping container for migrating petabytes of data from on-premises data centers to AWS.
Barr notes that AWS recently "dramatically" reduced latency for 0.01% of the Put requests to S3.
"While this might seem like a tiny win, it was actually a much bigger one," Barr explains, as it helped avoid customer requests that time out and retry. Another benefit was that gave developers insights needed to reduce latency.
AWS today remains the largest cloud infrastructure provider, with quarterly revenues exceeding $12 billion and a $46 billion annual run rate. It's also become a star performer within Amazon, with former AWS CEO Andy Jassy recently taking over as chief of Amazon from Amazon founder Jeff Bezos.