Editor’s note: Orange Silicon Valley Intern Benoit Verkindt contributed to this project and post.
Blockchain technology, which forms the backbone of cryptocurrencies such as Bitcoin and Ethereum, depends on a global network of computation resources. With interest in both on the rise in 2017, those resources have faced increasing pressure to perform. In this new gold rush, it’s all about silicon.
Ethereum, currently the fastest-growing cryptocurrency, can be mined efficiently with commodity GPUs, making it inherently more democratic: it was designed to resist succumbing to high-powered surges in activity from application-specific integrated circuits (ASICs). By denying a competitive edge to a few large, ASIC-based mining data centers, Ethereum appeals to developers looking for a truly decentralized and fairly distributed cryptocurrency network. Put another way, the fact that Ethereum and other cryptocurrencies have been designed to be inherently ASIC-resistant — and open to mining with commercially available GPU silicon — is seen as a key differentiator from Bitcoin.
That brings us to the current state of Ethereum mining.
The global digital gold rush to mine Ethereum intensified in 2017, when a rising tide of digital miners added $100 million worth of high-performance GPUs to the global decentralized network in less than a month, according to Mitch Steves, an analyst at RBC Capital. This is the first time in the history of computing that such a decentralized network has emerged — and for high-performance computing enthusiasts, this is really exciting! That enthusiasm motivated the team I worked with to build the most efficient server we could construct for the purpose of enabling Ethereum (and other cryptocurrency) miners around the planet.
For context, the team I work with at Orange Silicon Valley has focused on Performance Critical Computing with the specific goal of accelerating AI and big data at the edge for both terrestrial and airborne (embedded) use cases, including environments constrained by size, weight, and power. Given our focus on performance at the edge, it was natural to add blockchain technology as an accelerant to fuel growth of this giant, distributed, global supercomputer. We entered into the mushrooming global blockchain infrastructure by focusing on one basic building block — a single mining server.
In all crypto-mining rigs — DIY or factory-made — the GPUs are the key to revenue generation, which takes the form of crypto coins. But they need their own support system: the lower-end processors on the server motherboard, the cabling, and the power supply are all necessary components, but they do not affect mining performance. An individual server’s mining capacity is 100% GPU-bound: the more GPUs the server can handle, the higher the return on investment for the platform. All miners are sensitive to the payback time for their rigs. Our objective was to push the limit on the maximum number of GPUs we could accommodate in a single rackmount server without any modification of the GPU cooling system.
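The payback-time sensitivity mentioned above boils down to simple arithmetic: gross mining revenue minus electricity cost, divided into the upfront hardware spend. The sketch below illustrates the calculation; every number in it (hardware cost, daily revenue, electricity rate) is a hypothetical placeholder, not a figure from our build.

```python
# Hypothetical payback-time estimate for a mining rig.
# All inputs are illustrative assumptions, not figures from this project.

def payback_days(hardware_cost_usd, revenue_per_day_usd, power_watts,
                 electricity_usd_per_kwh):
    """Days until cumulative net revenue covers the hardware cost."""
    power_cost_per_day = (power_watts / 1000.0) * 24 * electricity_usd_per_kwh
    net_per_day = revenue_per_day_usd - power_cost_per_day
    if net_per_day <= 0:
        return float("inf")  # electricity eats all revenue: never pays back
    return hardware_cost_usd / net_per_day

# Example: a $4,000 rig earning $20/day gross, drawing 2,200 W at $0.10/kWh.
days = payback_days(4000, 20.0, 2200, 0.10)
print(round(days, 1))  # → 271.7
```

Note that mining revenue per day is itself a moving target (it depends on coin price and network difficulty), which is why miners watch payback time so closely.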
As is often the case in the race for speed, less turned out to be more.
To test the limits of cost versus performance efficiency, we used GPUs specialized for mining, opting for the NVIDIA P106-100 built by MSI. These GPUs use the same silicon as the GTX 1060, but, importantly, the P106-100 lacks the display outputs found on the GTX 1060 and similar consumer gaming cards from NVIDIA. That omission turned out to be a great advantage for us: cards with active display hardware consume additional system resources, and a machine running more than 12-13 such cards can exhaust them. Without the (unnecessary) display ports on the NVIDIA GPUs, our server’s headless Ubuntu 16.04 LTS operating system was able to support 20 GPUs as PCIe endpoints.
The path to Ubuntu was another example of less is more: each PCIe card is rated to deliver 20 MH/s for Ethereum mining without any overclocking. In a Windows 10 Pro environment, overclocking was possible and allowed us to reach 24 MH/s per card. The gotcha is that Windows did not support more than 8 GPUs, so we switched to Ubuntu 16.04 LTS. With the 7RU KLIMAX 210S platform from our partners at CocoLink and 20 NVIDIA P106-100 cards, we were able to boot the system with all 20 GPUs working in parallel.
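The trade-off behind that decision is easy to verify with quick arithmetic: Windows offered a higher per-card hashrate via overclocking but capped the card count, while headless Ubuntu ran stock clocks across all 20 cards. A two-line sketch, using the per-card figures from this post:

```python
# Aggregate hashrate under the two configurations described above (MH/s).
windows_total = 8 * 24   # Windows 10 Pro: 8-GPU limit, overclocked to 24 MH/s each
ubuntu_total = 20 * 20   # Ubuntu 16.04 LTS: 20 GPUs at the stock 20 MH/s each

print(windows_total)  # 192
print(ubuntu_total)   # 400
```

Even with the overclocking advantage, the Windows configuration tops out at less than half the aggregate hashrate of the 20-GPU Ubuntu setup.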
“We have been working on designing the system architecture for a long time and finally developed the only system in the industry capable of accommodating 20 double-width GPUs in a single server under a single PCIe root complex,” said Dong-Hak Lee, CEO of CoCoLink Corp., a subsidiary of Seoul National University. “This server was originally designed for deep learning and other HPC workloads. We are working on modifying this design to make it as cost-effective as possible for all the crypto miners without compromising performance and reliability.”
Here is a screen capture of a Claymore Ethereum miner with 20 GPUs running in parallel. Each GPU — without any overclocking — draws an average of around 100 watts at 100% utilization and generates 20-21 MH/s.
With 20 GPUs in a single server, we were able to exceed 420 MH/s while consuming less than 2.2 kilowatts, making this the most efficient mining server available in the industry today. On a final note, we believe that more than 20 GPUs can theoretically be supported in a single system — but that would require some amendments to the existing design.
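For readers who want to compare rigs, the efficiency claim reduces to hashrate per watt. Using the figures reported above (420+ MH/s at under 2.2 kW):

```python
# Power efficiency of the full server, from the figures reported in this post.
total_hashrate_mhs = 420.0   # aggregate hashrate, MH/s (we exceeded this)
total_power_watts = 2200.0   # total draw, W (we stayed under this)

efficiency = total_hashrate_mhs / total_power_watts  # MH/s per watt
print(round(efficiency, 3))  # → 0.191
```

Since the hashrate is a floor and the power draw a ceiling, roughly 0.19 MH/s per watt is a conservative lower bound on the server’s actual efficiency.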