NVIDIA Tesla P100 Server Graphics Card

B&H # NVTP10016GB MFR # NVIDIA 900-2H400-0000-000
Special Order
Expected availability: 7-14 business days

Product Highlights

  • Pascal Architecture
  • 9.3 TFLOPS Single-Precision Performance
  • 18.7 TFLOPS Half-Precision Performance
  • 4.7 TFLOPS Double-Precision Performance
  • 16GB HBM2 vRAM
  • 4096-Bit Memory Interface
  • Passive Heatsink Cooling
  • PCI Express 3.0 x16 Interface

This item is noncancelable and nonreturnable.

$0.00 Tax Collected Outside NY and NJ

You Pay: $5,999.00


NVIDIA Tesla P100 overview

The NVIDIA Tesla P100 Server Graphics Card is designed for scientific and research applications. Its 3584 CUDA cores and 16GB of HBM2 vRAM, linked via a 4096-bit memory interface, deliver on the order of 9.3 TFLOPS at single precision, 18.7 TFLOPS at half precision, and 4.7 TFLOPS at double precision. This level of performance lets a single card take on tasks previously relegated to large rackmount compute clusters. As this is a server GPU, no display outputs are provided. The P100 uses a passive heatsink cooler: with no moving parts, it offers increased reliability and reduced power consumption, but it depends on the server's internal airflow to keep the GPU within its thermal limits.
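The quoted TFLOPS figures follow from the core count and clock speed. As a rough sketch (the ~1303 MHz boost clock is an assumption, not stated on this page; each CUDA core performs 2 FLOPs per cycle via fused multiply-add):

```python
# Back-of-envelope peak throughput for the Tesla P100 (PCIe).
CUDA_CORES = 3584
BOOST_CLOCK_HZ = 1.303e9  # assumed boost clock, not listed on this page
FLOPS_PER_CORE_PER_CYCLE = 2  # fused multiply-add = 2 FLOPs

fp32_tflops = CUDA_CORES * FLOPS_PER_CORE_PER_CYCLE * BOOST_CLOCK_HZ / 1e12
fp64_tflops = fp32_tflops / 2   # GP100 runs FP64 at 1/2 the FP32 rate
fp16_tflops = fp32_tflops * 2   # GP100 runs FP16 at 2x the FP32 rate

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # ≈ 9.3
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # ≈ 4.7
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # ≈ 18.7
```

The 1:2:4 ratio across FP64, FP32, and FP16 is a design feature of the GP100 chip and explains why the three listed figures are exact multiples of one another.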

Pascal Architecture
18.7 TeraFLOPS of FP16, 4.7 TeraFLOPS of double-precision, and 9.3 TeraFLOPS of single-precision performance powers new possibilities in deep learning and HPC workloads.
CoWoS HBM2
Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate with HBM2 technology for greater memory performance over the previous-generation architecture.
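The 732 GB/s bandwidth figure in the specs below follows from the 4096-bit bus width. A minimal sketch, assuming a per-pin data rate of roughly 1.43 Gb/s (715 MHz double data rate, which is not stated on this page):

```python
# HBM2 bandwidth = bus width (bytes) x per-pin data rate.
BUS_WIDTH_BITS = 4096
DATA_RATE_GBPS = 1.43  # assumed per-pin rate (715 MHz DDR)

bandwidth_gbs = BUS_WIDTH_BITS / 8 * DATA_RATE_GBPS
print(f"{bandwidth_gbs:.0f} GB/s")  # ≈ 732
```

The wide-but-slow approach is what makes HBM2 practical: GDDR5 buses run their pins far faster but are limited to 256- or 384-bit widths, so HBM2's 4096-bit stacked interface wins on total bandwidth.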
In the Box
NVIDIA Tesla P100 Server Graphics Card
  • Limited 3-Year Warranty

NVIDIA Tesla P100 specs

GPU
  CUDA Cores: 3584
  Process Node: 16 nm
  Architecture: Pascal
  Host Interface: PCI Express 3.0 x16

Memory
  Interface: HBM2
  Interface Width: 4096-bit
  Bandwidth: 732 GB/s
  Configuration: 16 GB

Floating-Point Performance
  Half-Precision: 18.7 TFLOPS
  Single-Precision: 9.3 TFLOPS
  Double-Precision: 4.7 TFLOPS

Compute APIs
  CUDA, DirectCompute, OpenCL, OpenACC

Form Factor
  Width: Single-slot
  Size: Full-height, full-length
  Power Consumption: 250 W

Packaging Info
  Package Weight: 3.3 lb
  Box Dimensions (L x W x H): 11.8 x 6.9 x 3.1"
