When I started putting together a server for testing a GPU-accelerated database (MapD), I initially hoped I could get by with simply upgrading my HP Z420 (8-core E5-2687W) workstation.
So I purchased:
- 128GB of DDR3 PC3-12800 RAM, cost: $650
- 11GB 1080Ti Video Card, cost: $725
My logic was that the video card had 3584 CUDA cores, more than enough to test high-core-count GPU data scans.
Of course, real-world testing often changes assumptions. I soon discovered my queries were running slower when using the GPU. It turns out that to get the benefit of GPU acceleration, my "hot" data, meaning any data accessed by a query that I want to actually go faster, needs to reside entirely in GPU RAM. And of course my queries tend to be large. After all, any database can return results sub-second on small data. I needed to show what a GPU DB could do.
The experience changed my perspective on how to purchase GPUs for a test system. I needed lots and lots of GPU RAM!
I returned the 1080Ti and went shopping for video cards on eBay.
A sampling of cards (pricing as of 1/7/2018):
| Card | Architecture | Price | Price/GB |
|------|--------------|-------|----------|
| GTX 770 4GB | Kepler | $120 | $30 |
| Tesla K80 24GB | Kepler | $2,100 | $87.50 |
| GTX 980 8GB | Maxwell | $400 | $50 |
| Titan Maxwell 12GB | Maxwell | $600 | $50 |
| GTX 1070 8GB | Pascal | $600 | $50 |
Notice the $50/GB price point. That was a common price for used cards, although brand-new or pro cards tended to cost more.
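Price per GB of VRAM turned out to be the metric that mattered for my workload, so a quick sketch of the comparison I was doing in my head (card names and prices from the list above):

```python
def price_per_gb(price_usd, vram_gb):
    """Dollars per GB of VRAM: the figure of merit when hot data must fit in GPU RAM."""
    return price_usd / vram_gb

# A few cards from the table above
print(price_per_gb(120, 4))    # GTX 770: 30.0
print(price_per_gb(2100, 24))  # Tesla K80: 87.5
print(price_per_gb(400, 16))   # M40 Grid: 25.0
```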
In the end, I found a special card with 16GB that cost $400 ($25/GB). I bought two of them. The cards are M40 Grid (not to be confused with the M40 Tesla). This model was never released to the public by Nvidia, so there is little demand for it on the used market today. However, its cost and memory density make it worth trying out.
For $800, or only $75 more than I paid for the 1080Ti, I now have room for 32GB of data instead of 11GB.
Stay tuned for a later post on how it performs.