Amazon Web Services is rolling out new EC2 instance types designed for in-memory applications and heavy analytics workloads. The company also launched Data Pipeline, a service that automates moving data between various systems.
The new tools were unveiled by Amazon CTO Werner Vogels at the Amazon Web Services (AWS) re:Invent conference in Las Vegas.
AWS Data Pipeline is an orchestration service that automates the movement of data between analytics systems, S3 and other services. In a nutshell, Data Pipeline is a web service that collects information from disparate systems and routes it where it needs to go. A demo highlighted how to create new Hadoop clusters as well as data nodes.
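To make the orchestration idea concrete, a pipeline is typically expressed as a declarative definition: data nodes (sources and destinations), activities (processing steps), and a schedule tying them together. The sketch below is purely illustrative; the object types and field names are hypothetical stand-ins, not the service's exact schema.

```python
# Illustrative sketch of a Data Pipeline-style definition: a scheduled
# copy of raw logs from S3 through a Hadoop processing step and back to S3.
# The type and field names here are hypothetical, not the exact AWS schema.

pipeline_definition = {
    "objects": [
        {
            "id": "DailySchedule",
            "type": "Schedule",       # run the pipeline once per day
            "period": "1 day",
        },
        {
            "id": "InputLogs",
            "type": "S3DataNode",     # source: raw logs sitting in S3
            "path": "s3://example-bucket/logs/",
            "schedule": {"ref": "DailySchedule"},
        },
        {
            "id": "ProcessLogs",
            "type": "HadoopActivity", # processing step run on a Hadoop cluster
            "input": {"ref": "InputLogs"},
            "output": {"ref": "Results"},
            "schedule": {"ref": "DailySchedule"},
        },
        {
            "id": "Results",
            "type": "S3DataNode",     # destination: processed output back in S3
            "path": "s3://example-bucket/processed/",
            "schedule": {"ref": "DailySchedule"},
        },
    ]
}

# A client submits a definition like this to the service, which then
# provisions the needed resources (e.g. a Hadoop cluster) and runs each
# step on schedule -- the "running around and collecting information"
# happens without hand-written glue code.
print(len(pipeline_definition["objects"]))
```

The point of the declarative shape is that the service, not the user, owns retries, scheduling and resource provisioning for each step.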
In addition, AWS is rolling out new EC2 instances designed for analytics. The first is a cluster high-memory EC2 instance type, which includes 240GB of RAM and two 120GB solid-state drives. "This is the instance type you want to use for in-memory," said Vogels.
The other new EC2 instance type is a high-storage one that comes with 117GB of RAM and 24 hard drives totaling 48TB of storage.
With these moves, Amazon is positioning itself as a provider of big data and analytics infrastructure. Coupled with AWS' move to offer data warehousing infrastructure as a service, it's clear the company is aiming to build a suite of analytics tools.
More: Amazon's Vogels: Next-gen IT architectures need to be 'cost aware' | Amazon Web Services: Rackspace's OpenStack low on customers' radar | Amazon Web Services launches Redshift, data warehousing as a service | Amazon Web Services cuts S3 prices, knocks old guard rivals | BitYota launches, eyes data warehousing as a service | NetApp, Amazon Web Services set hybrid cloud storage pact