
High Performance Computing


High Performance Computing (HPC) uses distributed computational cycles to decrease the time a single job would otherwise take. HPC jobs are typically either search-oriented or compute- and time-intensive; for example, distributed string searching over genomic data speeds up "needle in a haystack" comparison and analysis.

Researchers use custom and open-source software to analyze, distribute, and compute over large data sets. By following best practices, users can decrease job run time severalfold with distribution and cluster protocols such as MPI; a minimal launch sketch follows.
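
As a rough illustration of the speed-up from distribution (the program name, input file, and core count below are placeholders, and the exact launcher depends on the MPI module you load), an MPI-enabled analysis spreads one job across many cores instead of a single one:

    # Serial run: one core works through the whole data set.
    ./analyze_reads input.fastq

    # Distributed run: the same analysis split across 64 MPI ranks.
    # mpirun is provided by whichever MPI module is loaded (e.g. OpenMPI).
    mpirun -np 64 ./analyze_reads input.fastq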

Examples of HPC distribution include Monte Carlo computations, time- and space-intensive computations, and string-matching algorithms (over DNA sequences, for example). Users can create programs and scripts on the cluster here at UMass for pattern matching and general search-based needs; for example, using the HG18 reference genome we can write simple shell script(s) with the Perl scripting language for effective pattern matching, as sketched below. We (ARCS) can assist with these needs and help you create optimal routines and searches as needed.
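
A minimal sketch of such a script is below, assuming a motif and FASTA file of your own (the BamHI site GGATCC and chr1.fa are placeholders, not files provided by the cluster); matches that span a line break in the FASTA will not be caught by this simple per-line scan.

    #!/bin/bash
    # Hypothetical pattern-matching sketch: report each occurrence of a DNA
    # motif in a FASTA file, printing file name, line number, and 0-based offset.
    export MOTIF="GGATCC"      # placeholder motif (BamHI restriction site)
    FASTA="chr1.fa"            # placeholder: e.g. one HG18 chromosome you have staged

    perl -ne 'while (/$ENV{MOTIF}/g) { print "$ARGV\t$.\t", pos() - length($&), "\n" }' "$FASTA"

For a quick line-level check, grep -c GGATCC chr1.fa counts matching lines; the Perl form is easier to extend when the pattern or the output format grows.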

Resources


Priority Queues for faculty/labs.

  • The PI or campus purchases equipment with an agreed five-year life span.
  • The PI retains access to the 15,000+ core shared cluster.
  • The PI has a priority queue on the purchased equipment.
  • Jobs from the short queue (<=4 hrs) are allowed to backfill when priority queues are idle. Backfilled jobs will not be preempted (comparable to the large queue); see the submission sketch after this list.
  • Electricity, rent, software and hardware maintenance, and cluster administration are all included in the capital purchase cost.
  • All software modules available on the Shared Cluster will be available to your Priority Queue cluster nodes.
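
As a hedged sketch of a backfill-eligible submission (queue name, module names, resource requests, and script name are assumptions; consult the UMass HPCC Wiki for the queue names and modules actually defined on the cluster):

    # Load needed software from the shared module tree (module names are placeholders).
    module load samtools
    module load bwa

    # Submit to the short queue with a wall-clock limit of at most 4 hours so the
    # job is eligible to backfill idle priority-queue nodes without being preempted.
    bsub -q short -W 4:00 -n 4 -R "rusage[mem=4096]" \
         -o align.%J.out -e align.%J.err ./align_batch.sh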


How to procure a Priority Queue.

  1. Contact either:
  2. Give your hardware requirements (cores, memory, storage) to the MGHPCC Support team.
    • The HPC team will prepare a quote and order the hardware for you, using UMass's deep discounts with Dell and Lenovo.
    • The HPC team will rack and administer your hardware for you.
  3. Sign a Memorandum of Understanding. The key conditions are:
    • Your hardware will be attached to the Shared Cluster.
    • A new queue will be created giving your team priority access to those cores.
    • Your hardware will be removed in five years.

The High Performance Computing Cluster (HPCC) consists of the following hardware:

Networking:

  • EDR/FDR based Infiniband (IB) network
  • 10 Gigabit Ethernet network for the storage environment

Storage:

  • Dell/EMC high performance Isilon storage with 1,585 TB of Isilon H500
  • Dell/EMC high performance Isilon storage with 2,457 TB of NL410
  • Dell/EMC high performance Isilon storage with 162 TB of NL400

Data storage and pricing details

Computing:

# of Nodes | Cores per Node | CPU                                        | Memory per node | GPU                                             | Total Cores
3          | 32             | Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz   | 192G            | 3x Tesla V100 devices per node (ncc=7.0)        | 96
6          | 20             | Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz  | 256G            | 4x Tesla K80 devices per node (ncc=3.7)         | 120
3          | 16             | Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz   | 256G            | 2x Tesla C2075 devices per node (ncc=2.0)       | 48
120        | 4              | Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz     | 8-18G           | 1x GeForce GTX 560 Ti device per node (ncc=2.1) | 480
80         | 20             | Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz  | 128G            | N/A                                             | 1600
40         | 64             | AMD Opteron(tm) Processor 6278 @ 2.4GHz    | 512G            | N/A                                             | 2560
24         | 64             | AMD Opteron(tm) Processor 6378 @ 2.4GHz    | 512G            | N/A                                             | 1536
88         | 64             | AMD Opteron(tm) Processor 6378 @ 2.5GHz    | 512G            | N/A                                             | 5632
32         | 16             | Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz   | 192G            | N/A                                             | 512
16         | 20             | Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz  | 128G            | N/A                                             | 320
32         | 36             | Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz   | 192G            | N/A                                             | 1152
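
From a login node, the node types above can be inspected with the standard LSF host commands (output columns vary by LSF version):

    lshosts     # static host information: type, model, cores, and memory per host
    bhosts      # current state of each host and the number of jobs running on it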

Scheduling Software:

  • The HPC environment runs the IBM LSF scheduling software for job management; a typical command-line workflow is sketched below.
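
A typical LSF job lifecycle from the command line looks roughly like this (the job script name and job ID are placeholders; #BSUB directives inside the script set the queue, cores, and run time):

    bsub < myjob.lsf     # submit a job script containing #BSUB directives
    bjobs                # list your pending and running jobs
    bjobs -l 123456      # detailed status for one job ID
    bpeek 123456         # peek at the stdout of a running job
    bkill 123456         # cancel a job
    bhist -l 123456      # history and accounting once the job has finished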

MGHPCC Facility:

  • The Massachusetts Green High Performance Computer Center (MGHPCC) facility has space, power, and cooling capacity to support 680 racks of computing and storage equipment, drawing up to 15MW of power. 
  • High speed network connections to the facility are available via dark fiber, the Northern Crossroads, and Internet 2. 
  • The MGHPCC facility has been awarded LEED Platinum status.

Resources:

MGHPCC Web Site: Official web site of the Massachusetts Green High Performance Computing Center

UMass HPCC Wiki: Detailed information for users of the UMass Shared HPC Cluster

Account Request: Form to request an account on the UMass Shared HPC Cluster

ghpcc-discussion list: Email discussion list for users of the UMass Shared HPC Cluster

Request Technical Support

HPC Accounts and Training:

If you are interested in an HPC account, please use the Account Request form above to request access. Individual support for cluster usage is available. For more information, please contact hpcc-support@umassmed.edu.