My rig
With inexpensive GPUs available in the cloud, there isn't much need to spend thousands of dollars on a fully loaded machine with a local GPU. That said, having one handy doesn't hurt.
I’m fortunate enough to be able to develop on a Dell Precision 7520 with an i7 CPU @ 2.9GHz, 32GB of RAM and an SSD. Most importantly, it has an Nvidia M2200 GPU. It’s not a Titan Xp, but after loading up CUDA, cuDNN and TensorFlow 1.3, I’m finding it performs about 11% faster than an EC2 g2.2xlarge. The g2 GPUs each have 1,536 CUDA cores and 4 GB of RAM, compared to my laptop’s 1,024 CUDA cores and 4 GB of RAM, which explains why my laptop performed roughly on par with the single-GPU g2.2xlarge. Why it would come out slightly ahead, however, I’m not sure. PassMark Software has a really good GPU Compute Benchmark Chart that I found handy.
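For what it’s worth, a quick way to sanity-check a comparison like this is to time a large matrix multiply on each machine. Below is a minimal sketch of that kind of benchmark against the TensorFlow 1.x API mentioned above; the matrix size and iteration count are arbitrary choices of mine, not the actual test I ran.

```python
import time
import tensorflow as tf  # written against the TF 1.x API

N = 4096  # assumed matrix size; large enough to keep the GPU busy

with tf.device('/gpu:0'):
    a = tf.random_normal((N, N))
    b = tf.random_normal((N, N))
    c = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(c)  # warm-up run to exclude startup/allocation cost
    start = time.time()
    for _ in range(10):
        sess.run(c)
    print('avg matmul time: %.4f s' % ((time.time() - start) / 10))
```

Run the same script on both machines and compare the averages; it won’t capture an end-to-end training workload, but it’s a fast first-order check.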
Under no circumstances am I suggesting anyone drop $3k on a rig, but if it’s there, running small tests locally is pretty convenient.
Update: 6 October 2018 - I’m not suggesting that anyone do this, but the AI engineer and entrepreneur Jeff Chen wrote a popular blog post describing how building your own deep learning computer is 10x cheaper than AWS. An expandable computer with one top-end GPU (a 1080 Ti) can be had for $3k, which, depreciated over three years, makes it substantially less expensive than AWS or Google Cloud. According to Jeff, “Nvidia contractually prohibits the use of GeForce and Titan cards in datacenters. So Amazon and other providers have to use the $8,500 datacenter version of the GPUs, and they have to charge a lot for renting it. This is customer segmentation at its finest folks!”
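To make the depreciation math concrete, here’s the back-of-the-envelope version of that argument. The cloud rate below is my assumption (roughly what a single-GPU on-demand instance cost at the time), not a figure from Jeff’s post.

```python
# Rough cost comparison: owning a $3k rig vs renting a single-GPU instance.
BUILD_COST = 3000.0   # one top-end 1080 Ti machine, per the post
YEARS = 3             # depreciation window used in the post
CLOUD_RATE = 0.90     # assumed $/hour for a single-GPU on-demand instance

hours = YEARS * 365 * 24
own_rate = BUILD_COST / hours
print('own: $%.3f/hr vs cloud: $%.2f/hr (~%.0fx cheaper)'
      % (own_rate, CLOUD_RATE, CLOUD_RATE / own_rate))
```

At full utilization this works out to roughly $0.11/hour for the owned machine, which is where the order-of-magnitude savings comes from; at lower utilization the gap narrows.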