
Graphics cards are arguably the unsung heroes of computational advancement in recent years. While CPU (Central Processing Unit) performance has improved only gradually from generation to generation, GPU (Graphics Processing Unit) performance has improved by marked leaps and bounds; graphics cards from just a few years ago pale in comparison to the sheer power of current models. While this article is not primarily a marketing discussion, much of this improvement can be attributed to intense competition between the two major manufacturers, AMD and NVIDIA, as well as to the multi-billion-dollar gaming market. That market has a huge international consumer base that constantly hungers for the latest technologies (VR, for example) and the most powerful gear available to power them, providing the market capital GPU manufacturers need to push out significant hardware improvements on a regular basis.
Application Support and Reliability
So, how does this affect us pro video folk? Many of us in the professional markets know the Quadro and FirePro monikers assigned to NVIDIA’s and AMD’s workstation lines, respectively. It is worth noting that AMD has gone through some naming changes, starting with the long-defunct FireGL branding. The FirePro name is presently being phased out in favor of the slightly more confusing Radeon Pro label. For those not familiar, Radeon has long been AMD’s gaming brand, with origins in ATI, the company AMD fully acquired in 2006. The cards in these Quadro and FirePro/Radeon Pro workstation lines are generally expensive and sometimes suited only to specific applications. What’s more, the corresponding consumer cards in NVIDIA’s GeForce and AMD’s Radeon product lines offer very similar, if not better, hardware specifications for significantly less cash.
For example, take AMD’s Radeon Pro WX 7100 and Radeon RX 580 graphics cards. On the outside, the cards do look a bit different: the larger Radeon card houses three DisplayPort and one HDMI terminal on its output panel, as opposed to the four DisplayPort terminals on the Radeon Pro card, while the slighter Radeon Pro card features a sync connector and fewer power-pin connections than the Radeon variant. Spec-wise, both cards are based on the same Polaris chip with 2,304 stream processors (GPU cores), both have 8GB of vRAM, and both share a 256-bit memory interface. So, aside from a couple of minor differences in the physical interfaces, what separates these cards enough to warrant such a price difference? I hinted at it above but, to put it simply, the main differences are application support and reliability.
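To see just how little the underlying silicon differs, you can ask the driver directly. Below is a minimal sketch in C using the vendor-neutral OpenCL API (which both AMD and NVIDIA support) that prints the device name, compute-unit count, and memory size each installed GPU reports. One assumption worth flagging: on AMD’s GCN-based chips such as Polaris, each compute unit contains 64 stream processors, so a card reporting 36 compute units corresponds to the 2,304 cores cited above.

```c
/* Minimal OpenCL device query; build with, e.g.:
 *   gcc query_gpu.c -o query_gpu -lOpenCL
 * Prints the name, compute-unit count, and memory size the driver
 * reports, whether the card is branded Radeon or Radeon Pro. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint num_devices = 0;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platform found.\n");
        return 1;
    }
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices);

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[256];
        cl_uint compute_units;
        cl_ulong mem_bytes;

        clGetDeviceInfo(devices[i], CL_DEVICE_NAME,
                        sizeof(name), name, NULL);
        /* On AMD GCN hardware, 1 compute unit = 64 stream processors */
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(compute_units), &compute_units, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(mem_bytes), &mem_bytes, NULL);

        printf("Device: %s\n", name);
        printf("  Compute units: %u\n", compute_units);
        printf("  Global memory: %llu MB\n",
               (unsigned long long)(mem_bytes / (1024 * 1024)));
    }
    return 0;
}
```

Run on a WX 7100 and an RX 580, a query like this would return near-identical numbers, which is exactly the point: the price gap is not explained by the raw hardware.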
While supply and demand do play a part in the pricing of workstation hardware in general, workstation GPUs differentiate themselves chiefly through application support. Many scientific, CAD (Computer-Aided Design), oil-surveying, architectural, and other design applications that require precise rendering or number-crunching are certified to work only with workstation-grade graphics cards. Some applications will go so far as to lock out the user if a certified GPU is not detected. Research and development by the software companies, in conjunction with NVIDIA and AMD, costs money, and that cost is passed down to the end user in the price.

The reliability aspect is twofold, because it applies both to the reliability of the hardware itself and to the reliability of the computational results the GPU produces. While on paper a workstation graphics card puts up performance numbers similar to those of its consumer counterpart, those statistics don’t tell the whole story. Workstation GPUs are built from cream-of-the-crop components; as far as hardware yields are concerned, this cherry-picked silicon is the best coming off the production lines, ensuring consistently reliable performance over the life of the product. The memory used for the vRAM is also of the ECC (Error Correcting Code) variety. Across millions of calculations, minute errors occasionally crop up; ECC corrects those errors, ensuring that output calculations are accurate and consistent and that data sets are not corrupted. Such correction is not deemed necessary in consumer graphics cards.
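To make the idea behind ECC concrete, here is a toy sketch, not AMD’s or NVIDIA’s actual implementation (real ECC memory uses wider SECDED codes baked into the hardware), of a classic Hamming(7,4) code: four data bits are stored alongside three parity bits, and a single flipped bit can later be detected and repaired.

```c
/* Toy illustration of the principle behind ECC memory: a
 * Hamming(7,4) code that detects and repairs one flipped bit. */
#include <stdio.h>

/* Encode 4 data bits into a 7-bit codeword.
 * Bit positions run 1..7; parity bits sit at positions 1, 2, 4. */
static unsigned encode(unsigned data)
{
    unsigned d1 = (data >> 0) & 1, d2 = (data >> 1) & 1;
    unsigned d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    unsigned p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    unsigned p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    unsigned p4 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (p1 << 0) | (p2 << 1) | (d1 << 2) |
           (p4 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
}

/* XOR the positions of all set bits; a nonzero result ("syndrome")
 * is exactly the position of the corrupted bit. */
static unsigned correct(unsigned word)
{
    unsigned s = 0;
    for (unsigned pos = 1; pos <= 7; pos++)
        if ((word >> (pos - 1)) & 1)
            s ^= pos;
    if (s)                         /* single-bit error detected */
        word ^= 1u << (s - 1);     /* flip it back */
    return word;
}

/* Extract the 4 data bits from positions 3, 5, 6, 7. */
static unsigned decode(unsigned word)
{
    return ((word >> 2) & 1) | (((word >> 4) & 1) << 1) |
           (((word >> 5) & 1) << 2) | (((word >> 6) & 1) << 3);
}

int main(void)
{
    unsigned data = 0xB;                 /* binary 1011 */
    unsigned sent = encode(data);
    unsigned flipped = sent ^ (1u << 4); /* simulate one bit flip */

    printf("original  : 0x%X\n", data);
    printf("corrupted : decodes to 0x%X\n", decode(flipped));
    printf("corrected : decodes to 0x%X\n", decode(correct(flipped)));
    return 0;
}
```

Without the parity bits, the flipped bit would silently turn 0xB into 0x9 and propagate through every downstream calculation; with them, the error is caught and reversed, which is precisely the guarantee workstation cards sell.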
Conclusion
All of these factors (supply and demand, extensive R&D, and tighter hardware tolerances) contribute to the increased cost of workstation cards. If you need one, you know who you are, and, like most professional equipment, the investment pays off in the work it does. An interesting wrinkle is that the consumer market is very much responsible for the advancement of graphics card hardware. New technologies just coming to market, such as HBM (High-Bandwidth Memory), can be attributed directly to the enormous gaming market, and through its symbiotic relationship with professional hardware, continued improvement in the workstation segment is all but guaranteed.