Redefining Open Source for the Data Center

The Open Compute Project began redefining the phrase “open source” when it was formed about ten years ago. Until then, the term had mostly referred to software and, to a lesser extent, to hardware.

OCP uses the term “open” to refer to specifications for every part of a data center, including servers, network switches, power, and cooling.

While this means OCP’s usage differs greatly from how the open source software community uses the term, the two sometimes overlap.

That is because OCP shares some of the same concerns as open source software developers, such as avoiding vendor lock-in and staying “vendor-agnostic.”

Rob Coyle, co-lead of the Open Compute Project’s OCP Ready program, told DCK, “I like to think of vendor agnosticism or vendor neutrality as the slippery slope that led me all the way to open source.”

“Going fully open source is the next natural step, once someone realizes the value in a supply chain that can be made robust by having different suppliers for a piece of equipment that fulfill the same performance requirements.”

On August 17, Coyle will present “Vendor Neutrality and Open Source: Optimizing the Data Center Design Process” at Data Center World in Orlando.

What Is OCP?

The Open Compute Project began as a Facebook internal effort called “Project Freedom” in 2009. Two years later, Facebook joined with Intel, Rackspace, Goldman Sachs, and Sun Microsystems co-founder Andy Bechtolsheim to launch the Open Compute Project, tasked with sharing data center product designs and best practices.

“Basically, approximately ten years ago, they were going to consume so much server and hardware technology that they had to hand out all of their design files so that companies could build it for them,” Coyle explained. “That’s developed to [cover] the full data center ecosystem, which now includes buildings, heating and cooling, and a variety of other things not related to software.”

From there, these hyperscaler-focused designs have been customized to fit the demands of smaller data centers hosted by colocation providers and even on-premises data centers.

“We’ve seen these hyperscaler technologies trickle down for a long time, long before the Open Compute Project existed,” he said. “We’re seeing that trend continue, if not accelerate, with Facebook, Microsoft, and LinkedIn all relying heavily on open source OCP technology, and colocation facilities joining our OCP Ready program to signal to the market that they’re ready to embrace open hardware.”

OCP Ready is a self-assessment tool that allows data center operators to compare themselves to OCP specifications on anything from floor weight ratings to doorway heights and cooling supply.

The recommendations include OCP’s 21-inch Open Rack design, which supports power densities of up to 40kW per rack but is larger than, and incompatible with, Open19, another open source rack specification that originated at LinkedIn and is now a Linux Foundation project.

Although “the 19-inch standard rack is the prevailing approach,” OCP racks offer advantages, according to Coyle.

“At the back of our rack, we don’t have any connection points,” he explained. “You can push one of our racks practically all the way up against a wall and have access to all essential wiring at the front of our racks as long as there is sufficient airflow.”

Data center operators that use the racks can also take part in an equipment recycling program for hyperscaler hardware that OCP runs with firms like ITRenew.

“You can get hardware that has been refreshed and repurposed and is still fairly new at a considerably lower cost than you would pay for brand-new servers from a regular vendor,” Coyle added. “A lot of the technology that Facebook buys is about 18 months ahead of what’s available on the open market. They’ve been using it for around three years, so when it hits the secondary market, someone can acquire technology that is effectively only 18 months old for a significantly lower price.”

“You don’t need a central UPS because of technologies like onboard battery storage,” he noted, “and you could remove that from the design of your data center, which could mean big cost savings for a smaller on-prem facility.”

Data Center World 2021

We asked Coyle for a synopsis of the talk he’d be giving at Data Center World this year.

“It’s a snapshot of present market conditions,” he explained. “How the pandemic over the last year or two has hastened the need for vendor agnosticism, how the next step is to go open source, and some examples of how that’s worked out incredibly effectively.”

“We built millions of square feet of data centers even though the supply chain was severely constricted last year, at least in North America and the United States, where our GDP was cut in half,” he continued. “I feel you were at a major disadvantage if you were tied to one manufacturer or vendor by your specification or by other limits built into the design of your data center.”

“It’s only going to get harder as the economy recovers and the rest of the globe reconnects, and other industries begin to consume the raw materials that we use to power our data centers,” he said. “To preserve our supply chains and keep these projects on schedule and on budget, we need to think in new ways and move away from legacy ways of thinking.”

The Orange County Convention Center in Orlando, Florida will host Data Center World this year from August 16 to 19.