NVIDIA Tesla P100 Server Graphics Card

BH #NVTP10016GB • MFR #NVIDIA 900-2H400-0000-000
NVIDIA
Key Features
  • Pascal Architecture
  • 9.3 TFLOPS Single-Precision Performance
  • 18.7 TFLOPS Half-Precision Performance
  • 4.7 TFLOPS Double-Precision Performance
The NVIDIA Tesla P100 Server Graphics Card is designed for scientific and research applications. Its 3584 CUDA cores and 16GB of HBM2 VRAM, linked via a 4096-bit interface, deliver on the order of 9.3 TFLOPS of single-precision, 18.7 TFLOPS of half-precision, and 4.7 TFLOPS of double-precision performance. This allows a single card to handle tasks previously relegated to large rackmount compute clusters. As this is a server GPU, no display outputs are provided. Additionally, the P100 uses a passive heatsink with no moving parts, which increases reliability and reduces power consumption; however, it depends on the server's internal airflow to cool the GPU.
Special Order
Expected availability: 7-14 business days
$5,999.00

Important Notice

  • This item is noncancelable and nonreturnable.

NVIDIA Tesla P100 Overview


Pascal Architecture

9.3 TeraFLOPS of single-precision, 18.7 TeraFLOPS of half-precision (FP16), and 4.7 TeraFLOPS of double-precision performance power new possibilities in deep learning and HPC workloads.
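The three throughput figures above are consistent with the card's 3584 CUDA cores and the Pascal GP100 rate ratios (FP16 at 2x and FP64 at 1/2 the FP32 rate). A minimal sketch of that arithmetic, assuming a boost clock of roughly 1303 MHz (the published figure for the PCIe P100, not stated on this page):

```python
# Theoretical peak throughput of the Tesla P100, reconstructed from specs.
CUDA_CORES = 3584
BOOST_CLOCK_HZ = 1303e6  # assumed PCIe-variant boost clock, not on this page

# Each CUDA core can retire one fused multiply-add (2 FLOPs) per cycle at FP32.
fp32_tflops = CUDA_CORES * BOOST_CLOCK_HZ * 2 / 1e12
fp16_tflops = fp32_tflops * 2   # GP100 runs FP16 at twice the FP32 rate
fp64_tflops = fp32_tflops / 2   # and FP64 at half the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ~9.3
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ~18.7
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ~4.7
```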

CoWoS HBM2

Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate with HBM2 technology for greater memory performance over the previous-generation architecture.
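The 732 GB/s bandwidth figure in the spec table below follows from the 4096-bit HBM2 interface. A quick sketch, assuming an effective per-pin data rate of about 1.43 Gb/s (715 MHz double data rate, an assumption not stated on this page):

```python
# Reconstructing the quoted HBM2 memory bandwidth from the bus width.
BUS_WIDTH_BITS = 4096
PER_PIN_RATE_GBPS = 1.43  # assumed effective data rate per pin (715 MHz DDR)

# Bandwidth = bytes transferred per cycle across the bus * data rate.
bandwidth_gb_s = BUS_WIDTH_BITS / 8 * PER_PIN_RATE_GBPS
print(f"Memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # ~732
```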

NVIDIA Tesla P100 Specs

GPU
  CUDA Cores: 3584
  Process Node: 16 nm
  Architecture: Pascal
Interface
  Host Interface: PCI Express 3.0 x16
Memory
  Interface: HBM2
  Interface Width: 4096-bit
  Bandwidth: 732 GB/s
  Configuration: 16 GB
Floating Point
  Half-Precision Performance: 18.7 TFLOPS
  Single-Precision Performance: 9.3 TFLOPS
  Double-Precision Performance: 4.7 TFLOPS
APIs
  Compute: CUDA, DirectCompute, OpenCL, OpenACC
Form Factor
  Width: Single-slot
  Form Factor: Full-height, full-length
Power
  Power Consumption: 250 W
Packaging Info
  Package Weight: 3.3 lb
  Box Dimensions (LxWxH): 11.8 x 6.9 x 3.1"
