Facebook has outfitted its new flagship Lulea datacentre entirely with self-designed server hardware, turning its back on the Dells and HPs of the IT industry.
Jay Parikh, head of infrastructure engineering at the social-networking company, revealed the divorce from OEMs (original equipment manufacturers) at GigaOm Structure Europe on Wednesday.
Once outfitted, Facebook's fresh-air-cooled datacentre in Lulea, Sweden, will represent the "first time Facebook supplies server hardware that is 100-percent not OEM", Parikh said.
Facebook's rejection of enterprise vendors comes after the chief technology officers of Amazon, Equinix and Red Hat said traditional OEMs are under threat from the shift to commodity or self-designed hardware by large cloud operators.
Parikh said Facebook expects to fit out the facility with its server hardware over the next year, as the company is still putting the finishing touches to the Lulea datacentre building.
The Facebook server designs being used will probably be submitted to the Open Compute Foundation in January to become its new generation of servers, he told ZDNet.
The social-networking giant finds that using its own equipment offers "significant improvements in terms of cost benefits... [and] energy efficiency", he added.
Facebook designs, tests and prototypes new server and storage gear in a busy hardware lab at its headquarters in Menlo Park, California.
As for the future, Facebook is likely to try to use more of its own kit in more locations. It has previously reported airflow problems in datacentres where Open Compute servers sat alongside equipment from typical OEMs.
"For us, we'll continue to focus on what gives us the best flexibility," Parikh said. "Our product is such that it's changing all the time. We've got to make sure the infrastructure continues to evolve and move quickly."