The idea behind the Open Compute Project had been in practice for years before it went public. High-tech companies like Google, Amazon, and Facebook build their own data centers from commodity components. In 2011, Facebook decided to take this behind-the-scenes process public, both to benefit from outside suggestions that would help enhance its data centers and reduce costs, and so that other companies could adopt these newly developed designs and benefit as well.
The open-source custom motherboards and subsystems created by Facebook (and other high-tech companies) were originally designed specifically for the world of large, hyperscale data centers. Thanks to this "Do It Yourself" (DIY) hardware approach, the designs can be adopted by small, medium, and large enterprises alike, and the resulting hardware can be sold at a much lower price.
The goal of the project is to build one of the most efficient computing infrastructures at the lowest possible cost by custom designing and building the software, servers, and data centers from the ground up, and then sharing these technologies as they evolve. Today's data centers are built from large numbers of inexpensive, general-purpose servers.
For example, when purchasing a Cisco server, buyers are limited in how far they can optimize the specifications, such as the number of hard drives or CPUs, by the restrictions Cisco sets. In addition, every server sold comes with its own power supply, which forces many data centers to convert power three or four times (from the incoming utility feed to the UPS, back out of the UPS into a power distribution unit, and finally into the server's power supply), losing 2 to 5 percent of the power each time it is stepped down. That is without counting the additional power spent cooling all of these servers. The OCP community has worked with other open-source projects to develop energy-efficient servers, a 100% air-side economizer with an evaporative cooling system to support them, and a simple screw-less server chassis with an integrated DC/AC power distribution scheme, explained later on. The resulting servers are 38% more energy efficient and 24% less expensive.
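To see how those per-step losses compound, here is a minimal sketch of the arithmetic. The `delivered_fraction` helper and the three-stage chain (feed → UPS → PDU → server PSU) are illustrative assumptions; only the 2–5% per-conversion loss range comes from the text above.

```python
# Sketch: cumulative power loss across repeated power conversions.
# Per-stage losses of 2-5% are the figures cited above; the
# three-stage chain is an illustrative assumption.

def delivered_fraction(per_stage_loss: float, stages: int) -> float:
    """Fraction of input power that survives `stages` conversions,
    each losing `per_stage_loss` (e.g. 0.05 for 5%)."""
    return (1.0 - per_stage_loss) ** stages

# With three conversions at 5% loss each, only about 85.7% of the
# incoming power actually reaches the server:
print(f"{delivered_fraction(0.05, 3):.3f}")  # -> 0.857
```

Even at the optimistic 2% per stage, four conversions waste roughly 8% of the power before it does any useful work, which is why OCP designs try to eliminate conversion steps entirely.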
Figure 1: Compute sled stripped of unnecessary accessories and wasted space
- General-purpose compute sleds: motherboards populated with CPUs, memory, and PCI cards for specific tasks
- Storage sleds: high-density disk arrays
- Memory sleds: systems with large amounts of RAM and low-power processors, designed for large in-memory databases and indexing
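The three sled types above can be sketched as a simple inventory model. Every field name and capacity figure here is a hypothetical example for illustration, not part of any OCP specification.

```python
# Sketch: the sled types above modeled as a simple rack inventory.
# Field names and capacities are hypothetical, not an OCP spec.
from dataclasses import dataclass

@dataclass
class Sled:
    kind: str     # "compute", "storage", or "memory"
    cpus: int
    ram_gb: int
    drives: int

rack = [
    Sled(kind="compute", cpus=2, ram_gb=256, drives=0),   # CPU-heavy
    Sled(kind="storage", cpus=1, ram_gb=32, drives=30),   # disk-dense
    Sled(kind="memory", cpus=1, ram_gb=1024, drives=0),   # RAM-heavy
]

# Because each sled is specialized, rack-level capacity is just the
# sum over whichever mix of sleds the workload calls for:
total_ram = sum(s.ram_gb for s in rack)
```

The point of the model is the one the sled design makes in hardware: capacity in each dimension (CPU, disk, RAM) scales by swapping in more sleds of the matching type, rather than by over-provisioning general-purpose boxes.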
Figure 2: Hot-swappable sleds sharing a common power supply and cooling system
Figure 3: Storage drawer filled with drives, specifically designed for high capacity 
What happens to Cisco, HP, IBM, and the other big players?
Check the video below for an introduction to their project: