Marking the next stop on its ongoing Internet of Things roadmap, Intel unveiled a new family of Xeon processors designed for real-time analytics.
The Xeon processor E7-8800/4800 v3 product families come with up to 18 cores (a 20 percent increase compared to the previous generation) and up to 45MB of last-level cache.
Combined, Intel touted, those improvements allow the new Xeon family additions to deliver up to 70 percent more results per hour than the v2 generation.
Suffice it to say, these chips are aimed at the massive data sets run by large corporations and organizations that rely on in-memory computing and big data analytics to shape their global business practices.
Intel already has 17 manufacturers signed up, including Hewlett-Packard, Oracle, Cisco and Dell, among others.
With 12 processor models in the pipeline, the Intel Xeon E7 v3 family will run from $1,224 to $7,175 in quantities of 1,000.
The processor maker boasted that these prices offer "ten times greater performance per dollar" while driving down the total cost of ownership by as much as 85 percent.
Intel also provided an update on its ongoing partnership and investment in Apache Hadoop-based software and services distributor Cloudera.
Just over a year ago, Intel jettisoned plans for its own Hadoop distribution, moving its funds over to Cloudera instead. At an estimated $740 million commitment, Intel insisted at the time that the stake in Cloudera was its largest-ever data center technology investment.
Now, celebrating the one-year anniversary of the deal, Intel provided an update on the collaboration, including four releases of the Cloudera distribution.
Looking forward, Intel and Cloudera announced that the next upgrade to the Hadoop distribution, fueled by Intel's new Xeon processors, will deliver up to 2.5 times better performance for encryption off-load.
This should mean an entire Hadoop data set could be encrypted using as little as one percent of the CPU's performance.
Furthermore, the update promises that full database encryption is now possible with minimal impact on system performance. Hadoop jobs should thus run faster, allowing more jobs to run simultaneously and, all in all, delivering more analytics at once.