NVIDIA Tesla P100 Server Graphics Card

BH #NVTP10016GB • MFR #NVIDIA 900-2H400-0000-000
Key Features
  • Pascal Architecture
  • 9.3 TFLOPS Single-Precision Performance
  • 18.7 TFLOPS Half-Precision Performance
  • 4.7 TFLOPS Double-Precision Performance
Discontinued

NVIDIA Tesla P100 Overview

The NVIDIA Tesla P100 Server Graphics Card is designed for scientific and research applications. Its 3584 CUDA cores and 16GB of HBM2 VRAM, linked via a 4096-bit memory interface, deliver 9.3 TFLOPS of single-precision, 18.7 TFLOPS of half-precision, and 4.7 TFLOPS of double-precision performance, allowing a single card to take on workloads that previously required large rackmount compute clusters. As a server GPU, the card has no display outputs. The P100 is cooled by a passive heatsink: the absence of moving parts improves reliability and lowers power consumption, but the card depends on the server chassis's internal airflow to stay within its thermal limits.
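As a rough, unofficial illustration, the short CUDA runtime sketch below queries the properties the card reports at runtime; the expected values noted in the comments are taken from the specifications listed further down this page.

// Minimal device-query sketch (assumes the P100 is device 0); build with: nvcc -arch=sm_60 query.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // GP100: 56 SMs x 64 FP32 cores per SM = 3584 CUDA cores
    std::printf("Device             : %s\n", prop.name);
    std::printf("SM count           : %d\n", prop.multiProcessorCount);
    std::printf("Global memory      : %.1f GB\n", prop.totalGlobalMem / 1073741824.0);
    std::printf("Memory bus width   : %d-bit\n", prop.memoryBusWidth);      // 4096-bit HBM2
    std::printf("Compute capability : %d.%d\n", prop.major, prop.minor);    // 6.0 (Pascal GP100)
    return 0;
}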

Pascal Architecture

18.7 TFLOPS of half-precision (FP16), 9.3 TFLOPS of single-precision, and 4.7 TFLOPS of double-precision performance power new possibilities in deep learning and HPC workloads.
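Much of that FP16 figure comes from Pascal's ability to issue paired half-precision operations through the half2 type. The toy kernel below is a minimal sketch of the idea, with illustrative names and uninitialized buffers; it is not NVIDIA sample code.

// Paired FP16 add on Pascal; build with: nvcc -arch=sm_60 fp16_add.cu
#include <cuda_fp16.h>
#include <cuda_runtime.h>

__global__ void add_fp16(const __half2* a, const __half2* b, __half2* c, int n2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        c[i] = __hadd2(a[i], b[i]);   // one instruction adds two FP16 values
    }
}

int main() {
    const int n  = 1 << 20;   // FP16 element count (even, so it packs into half2 pairs)
    const int n2 = n / 2;     // number of half2 pairs
    __half2 *a, *b, *c;
    cudaMalloc(&a, n2 * sizeof(__half2));
    cudaMalloc(&b, n2 * sizeof(__half2));
    cudaMalloc(&c, n2 * sizeof(__half2));
    // Buffers are left uninitialized; this sketch only shows the launch pattern.
    add_fp16<<<(n2 + 255) / 256, 256>>>(a, b, c, n2);
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}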

CoWoS HBM2

Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate (CoWoS) with HBM2 technology, delivering significantly higher memory bandwidth than the previous-generation architecture.
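One informal way to see that bandwidth is to time a large device-to-device copy with CUDA events, as in the rough sketch below. The 1 GiB buffer size and the read-plus-write accounting are assumptions for illustration; measured results will land somewhat below the 732 GB/s peak quoted in the specs.

// Rough device-to-device bandwidth check; build with: nvcc bandwidth.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1ULL << 30;   // 1 GiB per buffer
    void *src, *dst;
    cudaMalloc(&src, bytes);
    cudaMalloc(&dst, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    // Each byte is read once and written once, hence the factor of 2.
    double gbps = (2.0 * bytes) / (ms / 1000.0) / 1e9;
    std::printf("Device-to-device bandwidth: ~%.0f GB/s\n", gbps);

    cudaFree(src); cudaFree(dst);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return 0;
}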

NVIDIA Tesla P100 Specs

GPU
GPU Model: NVIDIA Tesla P100
Stream Processors: 3584 CUDA Cores
Floating Point Performance:
  Half Precision: 18.7 TFLOPS
  Single Precision: 9.3 TFLOPS
  Double Precision: 4.7 TFLOPS
Interface: PCI Express 3.0 x16
Supported APIs: CUDA, DirectCompute, OpenCL

Memory
Memory Configuration: 16 GB
Memory Type: HBM2
Memory Interface Width: 4096-bit
Memory Bandwidth: 732 GB/s

Power Requirements
Max Power Consumption: 250 W

Dimensions
Width: Single-Slot

Packaging Info
Package Weight: 3.3 lb
Box Dimensions (L x W x H): 11.8 x 6.9 x 3.1"