The primary computing resource at CBICA is a Red Hat Enterprise Linux-based HPC cluster.
The cluster is used for interactive work and software development, as well as for batch processing of image data. Most image-processing jobs in the lab use the Son of Grid Engine (SoGE) queuing system to manage resources efficiently for both parallel and serial computing jobs.
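As a hypothetical illustration of SoGE batch usage (the job name, memory request, parallel-environment name, and processing command below are assumptions for the sketch, not site-specific values), a job is typically described by a shell script whose `#$` comment lines are Grid Engine directives:

```shell
#!/bin/bash
# example_job.sh -- illustrative SoGE batch script; names and values are placeholders
#$ -N smooth_image                       # job name shown in qstat
#$ -l h_vmem=8G                          # request 8GB of virtual memory per slot
#$ -pe threaded 4                        # request 4 slots (parallel-environment name assumed)
#$ -o $HOME/logs/$JOB_NAME.$JOB_ID.out   # write stdout to a log file
#$ -j y                                  # merge stderr into the stdout log

# The processing command itself is a placeholder for an actual pipeline step.
my_image_tool --input scan.nii.gz --output smoothed.nii.gz --threads 4
```

Such a script would be submitted from an interactive node with `qsub example_job.sh`, after which `qstat` reports its queue state and `qdel` can cancel it; the scheduler parses the `#$` directives at submission time.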
The cluster has 7 nodes for interactive work, providing a total of 80 CPU cores and 184GB of RAM. These nodes are used for software development, visualization of results, and submission of jobs to the batch computing nodes.
The batch resources of the CBICA cluster consist of 64 nodes, providing a total of 916 CPU cores (1364 with hyper-threading) and 9.9TB of RAM. To support applications that require large amounts of memory, 18 nodes have 384GB of RAM each. Five of the compute nodes include NVIDIA GPUs (Titan X Pascal, K10, M2090, C1060), making a total of 14,576 GPU cores and 43GB of GPU memory available for GPU-enabled applications.
Compute nodes communicate directly via a 10 Gb/s network. Individual workstations are connected to the cluster backbone via Gigabit Ethernet.
The servers and desktop workstations within CBICA have access to approximately 100TB of disk space on a fiber-optic Storage Area Network (SAN).
Seven highly available infrastructure machines provide backups of server data, a source-code revision-control repository, an internal network information system, the CBICA public website, and internal documentation systems.