The Penn Medicine Academic Computing Services High-Performance Computing (PMACS HPC) environment is available to faculty, staff, and students of the University of Pennsylvania and to other non-profit organizations. It provides computational capacity, high-performance storage, and long-term archiving of large data sets.
The PMACS HPC system houses the following resources for our clients’ use:
- 19 Dell C6420 quad-node systems (76 compute nodes / 6,080 cores total)
- All nodes run CentOS v7.8
- 80 CPU cores per node (with hyper-threading enabled)
- 256 GB or 512 GB of RAM per node
- 56 Gb/s FDR or 100 Gb/s EDR InfiniBand connection to the HPC filesystem
- All nodes have 10 Gb/s Ethernet connections
- 1 Dell R940 large-memory system with 1.5 TB RAM, 96 CPU cores, 10 Gb/s Ethernet, and a 100 Gb/s EDR InfiniBand connection to the HPC filesystem
- 2 GPU nodes, each with 1x Nvidia Tesla P100 GPU card, 512 GB RAM, 88 CPU cores, 10 Gb/s Ethernet, and a 100 Gb/s EDR InfiniBand connection to the HPC filesystem
- 4.2 Petabytes of IBM Spectrum Scale (GPFS) Disk Storage (2 tiers, no backup)
- 1.3 Petabytes of mirrored archive tape storage
- Platform LSF job scheduling system
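As a brief illustration of using the Platform LSF scheduler listed above, a batch job is typically described in a script with `#BSUB` directives. The sketch below is a minimal example only; the core count, memory value, and job name are illustrative assumptions, not PMACS defaults or requirements.

```shell
#!/bin/bash
# Minimal LSF job script sketch. The resource values below are
# illustrative assumptions, not site-specific defaults.
#BSUB -J example_job          # job name
#BSUB -n 4                    # request 4 cores
#BSUB -M 8192                 # per-job memory limit (units can vary by site)
#BSUB -o example_job.%J.out   # stdout file (%J expands to the job ID)
#BSUB -e example_job.%J.err   # stderr file

# The actual workload goes here.
echo "Running on $(hostname)"
```

Such a script would be submitted with `bsub < script.sh`, and job status checked with `bjobs`.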
Please see the “The Basics” link in the left navigation bar to get started. It includes instructions for requesting an account, billing and pricing details, acceptable-use documentation, service-level agreements (SLAs), and a Frequently Asked Questions (FAQ) page.
Please contact us with any questions at firstname.lastname@example.org.