Perelman School of Medicine at the University of Pennsylvania

High Performance Computing
Penn Medicine Academic Computing Services

Grant Information

The following text may be inserted into any grant application or other documentation that requires a description of the PMACS HPC services. It also summarizes the system's hardware and environment. Please note: unless specifically requested by the user, no HPC disk systems are backed up.

PMACS HPC

The PennHPC facility opened in April 2013 to meet growing demand for genomics processing and storage, as well as growth in other scientific areas requiring computational capacity, such as imaging and biostatistics/bioinformatics. The cluster is managed by two full-time system administrators and is located at the Philadelphia Technology Park, a Tier 3, SSAE 16/SAS 70 Type II audit-compliant co-location datacenter facility.

The cluster has 180 general-purpose compute nodes, 2 big-memory nodes and 2 GPU nodes. Of the 180 compute nodes, 36 are Dell C6420 quad nodes (4 nodes per enclosure) and the remaining 144 are IBM iDataPlex nodes. Each Dell C6420 node has two 20-core Intel Xeon Gold 6148 2.40 GHz CPUs, between 256 and 512 GB of RAM, a single 56 Gb/s InfiniBand connection to the GPFS file system, and 1.6 TB of dedicated scratch space provided by either a local SSD or an NVMe drive, depending on the node. Each IBM iDataPlex node has two eight-core Intel Xeon E5-2665 2.4 GHz CPUs, 192 or 256 GB of RAM, and a 500 GB local hard drive. A Dell R940 system and an IBM x3850 system serve the large-memory workloads of cluster users. The Dell R940 is configured with four 12-core Intel Xeon Gold 6126 2.60 GHz CPUs, 1.5 TB of RAM, a single 56 Gb/s InfiniBand connection to the GPFS file system, and 1.6 TB of dedicated scratch space on a local NVMe drive, while the IBM x3850 is configured with eight 8-core Intel E7-8837 2.6 GHz CPUs, 1.5 TB of RAM, and 500 GB of local storage. The two GPU nodes are each configured with two 22-core Intel Xeon E5-2699 v4 2.20 GHz CPUs, 256 GB of RAM, a single NVIDIA Tesla P100 GPU card (3,584 CUDA cores and 16 GB of RAM per card), a single 56 Gb/s InfiniBand connection to the GPFS file system, and a 10 Gb/s link to the rest of the cluster.

All compute nodes are subdivided into virtual processing cores, with the capability to provision up to 7,648 virtual cores at 3 to 6 GB of RAM per virtual core. Cluster and storage interconnection is provided by a 40 Gb/s core Ethernet fabric, with a minimum of 10 Gb/s connectivity to each compute node plus a single 56 Gb/s InfiniBand link on select nodes.
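
For reference, the physical core counts implied by the node inventory above can be tallied as in the short sketch below (Python). The node counts and per-node specifications are taken directly from this description; the 7,648 provisionable virtual cores quoted above is the figure reported by PMACS, not something derived here.

    # Tally of physical CPU cores from the PMACS HPC node inventory described above.
    nodes = {
        # name: (node count, CPUs per node, cores per CPU)
        "Dell C6420 compute":    (36,  2, 20),
        "IBM iDataPlex compute": (144, 2, 8),
        "Dell R940 big memory":  (1,   4, 12),
        "IBM x3850 big memory":  (1,   8, 8),
        "GPU node (E5-2699 v4)": (2,   2, 22),
    }

    total = 0
    for name, (count, cpus, cores) in nodes.items():
        node_cores = count * cpus * cores
        total += node_cores
        print(f"{name:24s} {count:3d} nodes x {cpus} CPUs x {cores} cores = {node_cores}")
    print(f"Total physical cores: {total}")
    # Virtual (provisionable) cores exceed the physical count; PMACS reports up to
    # 7,648 virtual cores at 3-6 GB of RAM per virtual core.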

The cluster nodes are attached to 4.2 petabytes of shared IBM GPFS storage (not backed up). The disk sub-system is presented to the compute nodes via an eight-node IBM GPFS manager cluster. Computational job scheduling/queuing and cluster management are orchestrated by the IBM Platform Computing (LSF) suite of products. Long-term active archiving of data is available via a SpectraLogic T950 tape library, with a current capacity of 1.2 petabytes and sufficient room for growth. Each archive tape is mirrored, providing redundancy in the event of a tape failure. Tape library and archive management are provided by a Quantum product that presents the tape archive as a simple file share while running robust data protection and verification processes in the background.
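
To illustrate how work typically reaches the LSF scheduler, the minimal sketch below (Python) submits a batch job by piping a job script to the standard bsub command. The job name, resource requests, and application command are hypothetical placeholders; queue selection and memory-limit units depend on the site's LSF configuration, so this is a sketch rather than a definitive recipe for the PMACS cluster.

    import subprocess

    # Hypothetical LSF job script: request 4 slots, reserve memory for the job
    # (how the rusage value is interpreted depends on the site's LSF settings),
    # and write stdout/stderr to files keyed by the LSF job ID (%J).
    job_script = """\
    #BSUB -J example_job
    #BSUB -n 4
    #BSUB -R "rusage[mem=6000]"
    #BSUB -o example_%J.out
    #BSUB -e example_%J.err
    ./my_analysis --input sample.bam
    """

    # Equivalent to running: bsub < job_script_file
    subprocess.run(["bsub"], input=job_script, text=True, check=True)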

PennHPC Costs 

The PennHPC baseline cost structure is fee-for-service, built on the service-center model. Costs are allocated to achieve cost recapture and budget neutrality: all operating costs are covered by usage fees, with no monies retained year over year. Rates as of 4/30/2014 are listed below, followed by a worked cost example.

  • $0.035 per computational vCore slot-hour
  • $0.055/GB/month for disk usage
  • $0.015/GB/month for mirrored active-archive tape storage
  • $95/hour for consulting services (excludes account setup)
  • No charge to maintain an account; usage is billed on an as-consumed basis only.
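
As a worked example of how these rates combine, the short sketch below (Python) estimates one month's charge for a hypothetical usage pattern. The job size and storage volumes are illustrative assumptions, not actual PMACS figures; the rates are those listed above, as of 4/30/2014.

    # Rates as of 4/30/2014, taken from the list above.
    CORE_HOUR_RATE = 0.035   # $ per computational vCore slot-hour
    DISK_RATE      = 0.055   # $ per GB per month of shared GPFS disk (no backup)
    ARCHIVE_RATE   = 0.015   # $ per GB per month of mirrored active-archive tape

    def monthly_cost(core_hours, disk_gb, archive_gb):
        """Estimate one month's charge for the given usage (illustrative only)."""
        return (core_hours * CORE_HOUR_RATE
                + disk_gb * DISK_RATE
                + archive_gb * ARCHIVE_RATE)

    # Hypothetical month: 10,000 vCore hours of computation, 2 TB on GPFS,
    # and 5 TB in the mirrored tape archive.
    print(f"${monthly_cost(10_000, 2_000, 5_000):,.2f}")
    # 10,000 x 0.035 + 2,000 x 0.055 + 5,000 x 0.015 = 350 + 110 + 75 = $535.00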

Current Administration

The system is managed by Penn Medicine Academic Computing Services, the central IS/IT department for the Perelman School of Medicine. 

Current staff members are:

  • Kash Patel, Associate VP Chief Digital Technology Officer, Corporate Information Services, Penn Medicine
  • Jim Kaylor, Director, Enterprise Research Applications (ERA), PMACS
  • Rikki Godshall, Manager, HPC and Cloud Services, ERA, PMACS
  • Anand Srinivasan, Sr. Project Leader, HPC and Cloud Services, ERA, PMACS