The following text may be inserted into any grant application or other documentation that requires a description of the PMACS HPC services; it also summarizes the system's hardware and environment. Please note: unless specifically requested by the user, no HPC disk systems are backed up.
The PennHPC facility opened in April 2013 to meet growing demand for genomics processing and storage, as well as growth in other computationally intensive areas such as imaging and biostatistics/bioinformatics. The cluster is managed by two full-time system administrators and is located at the Philadelphia Technology Park, a Tier-3, SSAE 16/SAS 70 Type II audit-compliant colocation/data center facility.

The 144 IBM iDataPlex cluster nodes each house two eight-core Intel Xeon E5-2665 2.4 GHz processors, 192 or 256 GB of RAM, and a 500 GB local hard drive. These nodes are subdivided into virtual processing cores, with the capability to provision up to 5,100 virtual cores at 6 GB of RAM per virtual core. Cluster and storage interconnection is provided by a 40 Gbps Ethernet fabric, with 10 Gbps to each node. The cluster nodes are attached to 1.8 Petabytes of IBM Storwize V7000 disk storage, housed in two separate performance tiers (no backup). The disk is presented to the compute nodes by a ten-node IBM Scale Out Network Attached Storage (SONAS) system leveraging the IBM General Parallel File System (GPFS). Computational job scheduling/queuing and cluster management are orchestrated by the IBM Platform Computing (LSF) suite of products.

Long-term active archiving of data is available via a SpectraLogic T950 tape library housing 1,910 LTO-5 tapes, 290 LTO-6 tapes, and 12 LTO-6 drives, for a total raw capacity of 3.6 Petabytes. Each archive tape is mirrored, providing redundancy in the event of a tape failure and bringing the usable archive capacity to 1.8 Petabytes. Tape library and archive management are provided by the SGI StorHouse product, which presents the tape archive as a simple file share while running robust data-protection and verification processes in the background.
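The quoted raw and usable archive figures follow directly from the tape counts; the quick arithmetic check below assumes the standard raw (uncompressed) cartridge capacities of 1.5 TB for LTO-5 and 2.5 TB for LTO-6:

```python
# Sanity check of the archive capacity figures quoted above.
# Assumed raw cartridge capacities: LTO-5 = 1.5 TB, LTO-6 = 2.5 TB.
LTO5_TB, LTO6_TB = 1.5, 2.5

raw_tb = 1910 * LTO5_TB + 290 * LTO6_TB  # 2865 + 725 = 3590 TB
raw_pb = raw_tb / 1000                   # ~3.6 PB raw
usable_pb = raw_pb / 2                   # mirroring halves usable capacity

print(round(raw_pb, 2), round(usable_pb, 2))  # → 3.59 1.79
```

The result matches the stated ~3.6 PB raw and ~1.8 PB usable capacities.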
The PennHPC baseline cost structure is fee-for-service, built on the service-center model. Costs are allocated to achieve cost recapture and budget neutrality: all operating costs are covered by usage fees, with no monies retained year over year. Costs as of 4/30/2014 are:
- $0.035 per vCore slot hour of computation
- $0.055/GB/month for disk usage
- $0.015/GB/month for mirrored active-archive tape storage
- $95/hour for consulting services (excludes account setup)
- No charge to maintain an account; charges are billed on an as-consumed basis only.
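Because billing is purely consumption-based, a monthly charge can be estimated with simple arithmetic from the rates above; the usage figures in the sketch below are hypothetical examples, not real workloads:

```python
# Rough monthly cost estimate using the published rates (as of 4/30/2014).
CORE_HOUR_RATE = 0.035  # $ per vCore slot hour
DISK_RATE = 0.055       # $ per GB per month, disk
ARCHIVE_RATE = 0.015    # $ per GB per month, mirrored tape archive

def monthly_cost(core_hours, disk_gb, archive_gb):
    """Return the estimated monthly charge in dollars."""
    return (core_hours * CORE_HOUR_RATE
            + disk_gb * DISK_RATE
            + archive_gb * ARCHIVE_RATE)

# Hypothetical usage: 10,000 core hours, 500 GB disk, 2,000 GB archive.
print(monthly_cost(10_000, 500, 2_000))  # → 407.5
```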
The system is managed by Penn Medicine Academic Computing Services, the central IS/IT department for the Perelman School of Medicine.
Current staff members are:
- Brian Wells, Associate Vice-President, Health Technology & Academic Computing
- Andre Jenkins, Senior Director, Penn Medicine Academic Computing Services (PMACS)
- Jim Kaylor, Interim Director, Enterprise Research Applications (ERA), PMACS
- Rikki Godshall, Enterprise Architect, ERA/PMACS
- Anand Srinivasan, Project Lead, ERA/PMACS
Former ERA team members:
- Jason Hughes