NVIDIA Tesla M-Class GPU M2090 Computing Module

B&H # NVM20906PS MFR # 900-21030-0040-100

Important Notice: This item is non-cancelable and non-returnable
Special Order
Expected availability: 3-7 business days 
Free Standard Shipping
Not eligible for Free Expedited Shipping

Product Highlights

  • Based on Fermi CUDA Architecture
  • 512 CUDA Cores
  • 6 GB GDDR5 Memory
  • 177 GB/s Memory Bandwidth
  • ECC Memory Error Protection
  • System Monitoring Features
  • L1 and L2 Caches
  • NVIDIA Giga-thread Engine
  • Broad Programming Language and API Support

You Pay: $599.00


The Tesla M-Class GPU M2090 Computing Module from NVIDIA is based on the CUDA architecture code-named "Fermi". The Tesla GPU computing module is a fast parallel computing processor for high-performance computing (HPC). Its high performance makes it ideal for seismic processing, biochemistry simulations, weather and climate modeling, signal processing, computational finance, CAE, CFD, and data analytics.

Accelerate your science with the NVIDIA Tesla 20-series GPU. A companion processor to the CPU in the server, the Tesla GPU speeds up HPC applications by 10x. Based on the Fermi architecture, this GPU features up to 665 gigaflops of double-precision performance, 1 teraflop of single-precision performance, ECC memory error protection, and L1 and L2 caches.
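As a minimal sketch of what "companion processor" means in practice, the CUDA C example below offloads a simple SAXPY loop to the GPU; the kernel name, array size, and scalar are illustrative, and error checking is omitted for brevity:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Each GPU thread computes one element of y = a*x + y (SAXPY).
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements;
    // the CPU simply queues the work and collects the result.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);  // expected 5.0
    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```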

The Tesla M-class GPU module is integrated into GPU-CPU servers from leading OEMs. This gives data center IT staff much greater choice in how they deploy GPUs, with a wide variety of rackmount and blade systems and with remote monitoring and management capabilities, while enabling large, scale-out data center deployments.

Hundreds of CUDA Cores
Delivers up to 665 gigaflops of double-precision peak performance per GPU, enabling servers from leading OEMs to deliver more than a teraflop of double-precision performance per 1U of rack space. Single-precision peak performance is over one teraflop per GPU.
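The quoted peaks follow from the core count and processor clock, assuming one fused multiply-add (two floating-point operations) per CUDA core per clock and double precision running at half the single-precision rate, as on Fermi-class Tesla parts:

```latex
512 \times 2 \times 1.3\,\mathrm{GHz} \approx 1331\ \mathrm{GFLOPS\ (single\ precision)},
\qquad 1331 / 2 \approx 665\ \mathrm{GFLOPS\ (double\ precision)}
```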
ECC Memory Error Protection
Meets the critical requirement for computing accuracy and reliability in datacenters and supercomputing centers. Internal register files, L1/L2 caches, shared memory, and external DRAM all are ECC protected.
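A short sketch of checking ECC state and the device memory actually reported as available, using the CUDA runtime API (device index 0 assumed; error checking omitted):

```cuda
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);           // query device 0
    printf("%s: ECC %s\n", prop.name, prop.ECCEnabled ? "enabled" : "disabled");

    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);     // reported memory shrinks when ECC reserves space
    printf("memory: %.2f GB free of %.2f GB total\n",
           freeBytes / 1e9, totalBytes / 1e9);
    return 0;
}
```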
Up to 6 GB of GDDR5 Memory per GPU
Maximizes performance and reduces data transfers by keeping larger data sets in local memory that is attached directly to the GPU.
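A sketch of the data-locality idea: allocate a working set once in the GPU's on-board memory and run many kernel iterations against it, copying back only the final result (kernel, size, and iteration count are illustrative):

```cuda
#include <cuda_runtime.h>
#include <cstdlib>

// Placeholder per-element update applied repeatedly to the resident buffer.
__global__ void stepKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 0.5f + 1.0f;
}

int main() {
    const int n = 1 << 22;                       // ~16 MB kept resident in GPU memory
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    // No PCIe traffic inside the loop; the data set stays on the GPU.
    for (int iter = 0; iter < 100; ++iter)
        stepKernel<<<(n + 255) / 256, 256>>>(d, n);

    float *h = (float *)malloc(n * sizeof(float));
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    free(h);
    cudaFree(d);
    return 0;
}
```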
System Monitoring Features
Integrates the GPU subsystem with the host system's monitoring and management capabilities such as IPMI or OEM-proprietary tools. IT staff can thus manage the GPU processors in the computing system using widely used cluster/grid management solutions.
L1 and L2 Caches as Part of the NVIDIA Parallel Data-cache
Accelerates algorithms such as physics solvers, ray tracing, and sparse matrix multiplication, where data addresses are not known beforehand.
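On Fermi, each multiprocessor's 64 KB of on-chip memory is split between L1 cache and shared memory, and the split can be requested per kernel. A minimal sketch, with a hypothetical gather kernel whose data-dependent addresses benefit from a larger L1:

```cuda
#include <cuda_runtime.h>

// Indirect (gather) access: the addresses depend on idx[], so they are not known beforehand.
__global__ void gatherKernel(const float *in, const int *idx, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[idx[i]];
}

int main() {
    // Ask the runtime to favor L1 (48 KB L1 / 16 KB shared memory on Fermi) for this kernel.
    cudaFuncSetCacheConfig(gatherKernel, cudaFuncCachePreferL1);
    // ... allocate in/idx/out with cudaMalloc and launch gatherKernel as usual ...
    return 0;
}
```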
NVIDIA Giga-thread Engine
Maximizes throughput with context switching that is 10 times faster than the previous architecture, concurrent kernel execution, and improved thread block scheduling.
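Concurrent kernel execution is exposed through CUDA streams: independent kernels issued into different streams may run at the same time. A minimal sketch with two illustrative kernels:

```cuda
#include <cuda_runtime.h>

__global__ void scaleKernel(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

__global__ void offsetKernel(float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] += 1.0f;
}

int main() {
    const int n = 1 << 16;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Independent work in separate streams is eligible to overlap on Fermi.
    scaleKernel<<<(n + 255) / 256, 256, 0, s0>>>(a, n);
    offsetKernel<<<(n + 255) / 256, 256, 0, s1>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s0); cudaStreamDestroy(s1);
    cudaFree(a); cudaFree(b);
    return 0;
}
```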
Asynchronous Transfer with Dual DMA Engines
Turbo-charges system performance by transferring data over the PCIe bus while the computing cores are crunching other data.
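A sketch of how this overlap is typically expressed: pinned host memory plus cudaMemcpyAsync in per-chunk streams lets the copy engines move one chunk while a kernel works on another (chunk sizes and the kernel are illustrative):

```cuda
#include <cuda_runtime.h>

__global__ void incrementKernel(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

int main() {
    const int chunks = 4, chunkN = 1 << 18;
    size_t chunkBytes = chunkN * sizeof(float);

    float *host;                                  // pinned host memory is required for async copies
    cudaHostAlloc(&host, chunks * chunkBytes, cudaHostAllocDefault);
    float *dev;
    cudaMalloc(&dev, chunks * chunkBytes);

    cudaStream_t streams[chunks];
    for (int c = 0; c < chunks; ++c) cudaStreamCreate(&streams[c]);

    for (int c = 0; c < chunks; ++c) {
        float *h = host + c * chunkN;
        float *d = dev + c * chunkN;
        // Copy-in, compute, and copy-out of chunk c are queued in stream c;
        // the dual DMA engines can transfer other chunks while this one computes.
        cudaMemcpyAsync(d, h, chunkBytes, cudaMemcpyHostToDevice, streams[c]);
        incrementKernel<<<(chunkN + 255) / 256, 256, 0, streams[c]>>>(d, chunkN);
        cudaMemcpyAsync(h, d, chunkBytes, cudaMemcpyDeviceToHost, streams[c]);
    }

    cudaDeviceSynchronize();
    for (int c = 0; c < chunks; ++c) cudaStreamDestroy(streams[c]);
    cudaFreeHost(host); cudaFree(dev);
    return 0;
}
```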
Flexible Programming Environment with Broad Support of Programming Languages and APIs
Choose C, C++, OpenCL, DirectCompute, or Fortran to express application parallelism and take advantage of the innovative "Fermi" architecture.
Specifications
Floating Point Performance: 665 Gigaflops peak double precision; 1331 Gigaflops peak single precision
Chip: T20A GPU
Processor Clock: 1.3 GHz
CUDA Cores: 512
Memory: 6 GB GDDR5
Memory Clock: 1.85 GHz
Memory I/O: 384-bit GDDR5
Memory Configuration: 24 pieces of 128M x 16 GDDR5 SDRAM
Internal Connectors: 8-pin PCIe power connector, 6-pin PCIe power connector
Board Power: 225 W (maximum)
Supported Operating Systems: Linux and Windows (64-bit)
Memory Bandwidth: 177 GB/s (ECC off)

With ECC on, 12.5% of the GPU memory is used for ECC bits. So for example, 3 GB total memory yields 2.625 GB of user available memory with ECC on.
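For this module's 6 GB, the same 12.5% reservation works out to:

```latex
6\,\mathrm{GB} \times (1 - 0.125) = 5.25\,\mathrm{GB}\ \text{user-available with ECC on}
```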

Cooling: Passive heat sink

Packaging Info
Package Weight: 2.4 lb
Box Dimensions (L x W x H): 11.8 x 7.1 x 3.1"
