
Scaling Storage

Last week at the Parallels Summit, I sat down for a chat with Jerome Lecat, the CEO of Scality. It was an interesting conversation, as Scality seems to be close to solving one of the biggest IT problems – dealing with extremely large-scale storage.
Written by Simon Bisson and Mary Branscombe, Contributors

It’s a problem I’m familiar with, having built more than one large web mail system over the years. Oracle failed to scale, and distributed technologies like Maildir needed large storage arrays presented as a single LUN for really large systems – something that slowed things down considerably. It was that very problem Lecat was trying to solve when he came up with the architecture and approach used by Scality. How do you build a mail system that competes with Gmail – one that scales to many millions of users, each with many gigabytes of mail?

Traditional technologies don’t work when you’re storing billions of objects. Databases slow down, and file system approaches start to fail at around 100 million objects. The real issue, though, is user expectations: users want their mail at their fingertips, no matter how old it is. Then there’s the need to deal with changes in storage hardware, with the move from spinning disk to SSD and beyond. Finally, any system needs to work 24x7, able to carry on running while undergoing maintenance.

Scality aims to solve that problem using a distributed architecture, based on a ring of servers. Objects are distributed across the ring, with a directory on each server storing details of locally stored objects and of objects stored on nearby servers. Data is accessed without needing a central database, using the CHORD peer-to-peer algorithm from MIT. It’s a similar approach to Skype, where peer directories help locate users quickly. Scality’s done a lot of work to extend CHORD for storage, adding support for storage policies and for data replication. It’s also designed its service for use with a wide variety of storage hardware, so policies can handle data storage based on age, on storage speed, even on regulatory requirements. Lecat describes it as “making the data alive”.
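To make that lookup idea concrete, here’s a minimal Python sketch of Chord-style placement and routing on a hash ring. It illustrates the general technique rather than Scality’s implementation; the node names, the 32-bit ring and the power-of-two finger spacing are all assumptions made for the example.

```python
# A minimal sketch of Chord-style placement and lookup on a ring of storage
# nodes. Illustrative only: not Scality's code, and all names are made up.
import hashlib
from bisect import bisect_left

RING_BITS = 32
RING_SIZE = 2 ** RING_BITS

def ring_hash(key: str) -> int:
    # Map any identifier (a node name or an object UID) to a ring position.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING_SIZE

class Node:
    def __init__(self, name):
        self.name = name
        self.pos = ring_hash(name)
        self.fingers = []  # local directory: a handful of other nodes

def build_ring(names):
    nodes = sorted((Node(n) for n in names), key=lambda n: n.pos)
    # Each node's directory holds its successor plus exponentially spaced
    # peers, so lookups converge in O(log N) forwarding steps.
    count = len(nodes)
    for i, node in enumerate(nodes):
        step = 1
        while step < count:
            node.fingers.append(nodes[(i + step) % count])
            step *= 2
    return nodes

def owner(nodes, key_pos):
    # The object lives on the first node at or past its ring position.
    positions = [n.pos for n in nodes]
    return nodes[bisect_left(positions, key_pos) % len(nodes)]

def lookup(nodes, start, uid):
    # Start at an arbitrary node and forward toward the key, counting hops.
    key_pos = ring_hash(uid)
    target = owner(nodes, key_pos)
    node, hops = start, 0
    while node is not target:
        cur = node
        span = (key_pos - cur.pos) % RING_SIZE
        # Forward to the known peer closest to the key without overshooting
        # it; if no peer precedes the key, the successor must be the owner.
        preceding = [n for n in cur.fingers
                     if 0 < (n.pos - cur.pos) % RING_SIZE < span]
        node = (max(preceding, key=lambda n: (n.pos - cur.pos) % RING_SIZE)
                if preceding else cur.fingers[0])
        hops += 1
    return node, hops

# Example: a 50-server ring, looking up one mail object from one entry point.
ring = build_ring(f"storage-{i:03d}" for i in range(50))
node, hops = lookup(ring, ring[0], "user42/mail/000123")
print(f"object owned by {node.name}, reached in {hops} hops")
```

Each node only knows a handful of peers, yet a lookup started from any node reaches the owner in a logarithmic number of forwarding steps – which is why no central database is needed.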

Application servers feed data into the storage ring, which is made up of storage servers with 20 to 30 hard drives – of any type. There’s support for a tiered architecture, with slower WORM systems in the secondary tier for long-term storage, or even cloud storage like Amazon S3. Applications store data, and each piece has a UID, which can be a URI or a userID/mailID combination. Data is retrieved by contacting any server at random, and the CHORD algorithm uses the local directories to quickly converge on the correct server. The maximum number of hops is log10 of the number of servers, and a single hop takes around 4ms. That means data can be retrieved very quickly, even at petabyte (and exabyte) scale. It’s also possible to quickly find data copies, and to rebuild data if a primary store has failed – with the service still running, as it’s been designed to work around failure, like many cloud architectures. Scality’s architecture means that it’s always load balanced, delivering the copy that’s found fastest.
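Taking the figures quoted above at face value – at most log10(N) hops and roughly 4ms per hop – the worst-case lookup time grows very slowly with the size of the ring. The snippet below is just that back-of-the-envelope arithmetic, not a benchmark.

```python
# Rough worst-case lookup latency from the figures quoted in the interview:
# at most log10(N) hops, ~4 ms per hop. Illustrative arithmetic only.
import math

HOP_MS = 4  # approximate single-hop latency

for servers in (100, 1_000, 10_000, 100_000):
    max_hops = math.ceil(math.log10(servers))
    print(f"{servers:>7} servers: <= {max_hops} hops, ~{max_hops * HOP_MS} ms worst case")
```

On those assumptions, even a hundred-thousand-server ring stays at around 20ms in the worst case – consistent with the claim that retrieval stays fast as the system grows.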

The company is focusing on two initial markets. The first is email service providers, where it’s been deployed at Telenet (in Belgium) for more than 2 million users. The second is cloud storage providers, and Scality has just signed an OEM deal with Parallels, under which it will provide the storage layer of Parallels’ cloud automation platform.

Simon Bisson
