GPU System

The cluster hardware is currently housed in the Bioinformatics building at 24 Cummington St.

The GPU clusters are called “BUNGEE” (Boston University Networked GPU Experimental Environment) and “BUDGE” (Boston University Distributed GPU Environment), and they are run via the Engineering (ENG) Grid Engine. Please read the instructions for using the ENG Grid Engine, as well as the GPU-specific and MPI-specific instructions and examples.
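
As a quick sanity check before working through those pages, a minimal CUDA program such as the sketch below can be compiled with nvcc and run on a GPU node. The file name, kernel, and launch parameters here are illustrative assumptions, not part of the cluster documentation:

    // hello_gpu.cu -- a minimal sketch; all names here are illustrative.
    // Compile: nvcc hello_gpu.cu -o hello_gpu
    #include <stdio.h>
    #include <cuda_runtime.h>

    // Each thread writes its global index into the output array.
    __global__ void fill(int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = i;
    }

    int main(void) {
        const int n = 256;
        int host[256];
        int *dev = 0;

        cudaMalloc((void **)&dev, n * sizeof(int));
        fill<<<(n + 127) / 128, 128>>>(dev, n);
        cudaMemcpy(host, dev, n * sizeof(int), cudaMemcpyDeviceToHost);
        cudaFree(dev);

        printf("last element: %d (expected %d)\n", host[n - 1], n - 1);
        return 0;
    }

The resulting binary would then be submitted through the grid engine’s qsub, using whatever GPU resource flags the ENG Grid Engine pages specify.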

Current Hardware Specifications
(The clusters are constantly growing as new hardware is acquired!)

BUNGEE:

  • 16 nodes (plus 1 head node), each with:
      • 2 quad-core Intel Xeon E5530 (Nehalem) CPUs at 2.40GHz, for a total of 8 cores per node
      • 24GB of RAM
      • 1TB SATA hard disk
  • 2 NVIDIA Tesla Fermi C2075’s in each of 8 nodes (for a total of 16 C2075’s)
  • 2 NVIDIA Tesla Fermi C2070’s in each of the other 8 nodes (for a total of 16 C2070’s; a multi-GPU sketch follows this list)
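
Since each BUNGEE compute node carries two GPUs, a job that wants both must select them explicitly. The sketch below is an illustrative example (the kernel and sizes are assumptions) of driving each device in turn with cudaSetDevice:

    // multi_gpu.cu -- a sketch of using both GPUs in a BUNGEE node in turn.
    #include <stdio.h>
    #include <cuda_runtime.h>

    // Multiply every element by a constant.
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);   // expect 2 on a BUNGEE compute node

        const int n = 1 << 20;
        for (int d = 0; d < count; ++d) {
            cudaSetDevice(d);         // subsequent CUDA calls target GPU d
            float *x = 0;
            cudaMalloc((void **)&x, n * sizeof(float));
            cudaMemset(x, 0, n * sizeof(float));
            scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
            cudaDeviceSynchronize();  // wait for the kernel before moving on
            cudaFree(x);
            printf("GPU %d finished\n", d);
        }
        return 0;
    }

Production codes typically use one host thread or process per device instead of this serial loop, but the device-selection call is the same.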

BUDGE:

  • Non-InfiniBand setup, currently containing 3 very dense nodes:
      • 2 nodes, each with 8 NVIDIA Tesla M1060’s and a CPU/RAM configuration identical to BUNGEE’s
      • 1 node with 8 NVIDIA Tesla Fermi M2090’s, two 6-core Xeon X5670’s at 2.93GHz, and 96GB of RAM (a device-probing sketch follows this list)
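
Because BUDGE mixes GPU generations, it can be worth probing what a node actually offers before launching work. The illustrative sketch below prints each device’s name, compute capability, and memory; the expected capability values (1.3 for the M1060, 2.0 for the M2090) come from NVIDIA’s published specifications, not from this document:

    // probe_gpus.cu -- list every GPU on a node with its compute capability.
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);   // expect 8 on a BUDGE node
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, d);
            printf("GPU %d: %s, compute capability %d.%d, %.1f GB\n",
                   d, p.name, p.major, p.minor,
                   p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }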

Networking:

  • 1 Mellanox MT26428 QDR InfiniBand controller per node (an MPI sketch follows this list)
  • 1 34-port Mellanox QDR InfiniBand switch
  • 1 24-port Cisco 3650G Gigabit Ethernet switch (for the front-end Ethernet)
  • 1 36-port 3Com SuperStack II 3C (for the management network)
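
For jobs that span nodes over the QDR InfiniBand fabric, MPI is the usual vehicle (see the MPI-specific instructions linked above). The sketch below is an illustrative assumption of how a rank might claim a GPU and join a collective; it assumes an MPI implementation is installed, and the rank-to-GPU mapping is a simplification that real codes usually base on a node-local rank:

    // mpi_gpu.cu -- a sketch of pairing MPI ranks with GPUs.
    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Bind this rank to one of the node's GPUs (simplified mapping).
        int gpus = 0;
        cudaGetDeviceCount(&gpus);
        cudaSetDevice(rank % gpus);

        // Trivial collective over the interconnect: sum the rank numbers.
        int sum = 0;
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d ranks, sum of ranks = %d\n", size, sum);

        MPI_Finalize();
        return 0;
    }

Compiler wrappers and launch commands depend on the MPI installation, so follow the MPI-specific instructions for the actual build and submission steps.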

Disk subsystem:

  • Network Appliance (NetApp) FAS3020 filer with a large pool of disk, mounted over Gigabit Ethernet

Operating System:

  • BU Linux (CentOS 5.7)