Tilera set to chase Intel into the cloud

Summary: The MIT offshoot, which designs chips that scale from 16 to 100 compute cores, is taking on Intel in processors for cloud-computing tasks such as the analysis of big data

Tilera, a chip designer spawned from research at the Massachusetts Institute of Technology, has set its sights on Intel's lead in cloud-computing hardware.

Tilera hopes its novel chip architecture, which puts up to 100 Risc-based cores on a processor, will push it ahead by avoiding the latency bottlenecks that plague Intel's x86 designs.

Founded in 2004, the Silicon Valley-based company has brought out three generations of processors. These have built on MIT alumnus and chief technology officer Anant Agarwal's work, which includes a chip with 16 cores on one die and a mesh networking architecture that avoids bus bottlenecks. The current generation of Tilera processors, the Tile-GX family, scales between 16 and 100 cores and is tailored to cloud-computing applications.

The company feels it has an edge on Intel thanks to its iMesh on-chip networking architecture, which allows for low on-chip latency in passing messages between the cores and promises better power efficiency. Tilera's head of marketing, Ihab Bishara, sat down with ZDNet UK to talk about what Tilera is doing to step its chips up from embedded applications, such as networking and video compression, into cloud computing.

Q: What kind of tasks best fit processors with so many cores?
A: We provide anywhere between 16 and 100 cores. If you look at the markets we're after — networking, multimedia and servers — for networking and multimedia there are applications with thousands of transactions in parallel, thousands of flows that you're processing, thousands of streams of multimedia-type requests, so [the application] has to be very parallel in nature to start with.

A lot of the angst against multicore and many-core in general is people thinking, "I've got my application that's single-threaded and I need to run it in a hundred threads." If you have an application that runs on a single thread, you have no hope of running it on 100 cores. You need an application that's inherently parallel.
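
To make the point concrete, here is a minimal sketch, in plain C with POSIX threads rather than Tilera's own tools, of the kind of inherently parallel workload Bishara describes: thousands of independent flows split across a pool of workers, roughly one per core. The core and flow counts are illustrative assumptions, not figures from Tilera.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_WORKERS 100    /* e.g. one worker per core on a hypothetical 100-core part */
    #define NUM_FLOWS   10000  /* independent packet flows or media streams (illustrative) */

    /* Placeholder for per-flow work: parse, transform, forward. */
    static void process_flow(int flow_id) {
        (void)flow_id;
    }

    /* Each worker takes a disjoint slice of flows; there is no shared state,
     * so the work is "inherently parallel" in the sense used above. */
    static void *worker(void *arg) {
        long id = (long)arg;
        for (long f = id; f < NUM_FLOWS; f += NUM_WORKERS)
            process_flow((int)f);
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_WORKERS];
        for (long i = 0; i < NUM_WORKERS; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_join(threads[i], NULL);
        puts("all flows processed");
        return 0;
    }

Because each flow is handled independently, the same code scales from a handful of cores to 100 with no restructuring; a single-threaded application, by contrast, gains nothing from the extra cores.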

If you look at routers, services and switches, security boxes, media gateways — these are the top design wins, I'd say. And then on the multimedia side, there's videoconferencing through multipoint control units (MCUs), where you have many streams going on all at the same time.

Don't some people already run those tasks with application-specific integrated circuits (Asics) and field-programmable gate arrays (FPGAs), as they can be cheap and reasonably power efficient?
There is an option there. [But the choice is] "I have x86 [architecture] and I can develop software very easily, or I have Asics and FPGAs, which are not general purpose, will take a lot of time to develop, but they give me better energy, power."

What Tilera provides is flexibility and power efficiency at the same time. To give an example, the [yet-to-launch] GX3000 series will be equivalent to an [Intel] eight-core Sandy Bridge when it comes to video processing, and it will do it at 25W. Sandy Bridge does it at around 130-150W. Also, it's still [programmable in] C and C++ — you don't have to do special programming or GPU programming, and you get the benefit of the lower power and space as well.

What applications exist that could make use of such a large number of cores?
I think the parallel applications are there in the embedded market, in networking and multimedia. On the cloud side, the applications are already there. With the Facebooks, Googles and Zyngas, there are so many parallel applications and they need power efficiency, so that's where we fit.

Power is the biggest thing, because [the big web 2.0] companies have optimised the rest out of it. The biggest chunk of power consumption is now the processor. That's the entry point for ARM into servers.

Is big data [the growing practice in the enterprise of pulling together and analysing large datasets from a variety of different sources] an area of opportunity for chips like this?
In general, when it comes to Web 2.0, it's very small tasks: you have a request for some data, you need to do a few analyses on it and send it back out. Very small tasks but thousands and thousands of them — that's the nature of Web 2.0 servers today.

If you think about Facebook or Microsoft, from a datacentre point of view, you'll see...
