High Performance Computing

Visualization Cluster

High Performance Cluster with Viz-Wall

Kestrel is a 32-node CPU/GPU cluster acquired through a National Science Foundation Major Research Instrumentation grant (Award # 1229709). It is available to researchers at Boise State University, including undergraduate and graduate students affiliated with a research project.

The faculty System Architect for this HPC resource is Micron School of Materials Science and Engineering Professor Eric Jankowski.

Kestrel Features

Parallel Computing on CPU and GPU

Parallel Rendering

Tile Display Visualization

Node Configuration

CPUs – 2 Intel Xeon E5-2600 series processors – 16 cores/16 threads

GPUs – 2 Tesla K20 (nodes 1–22), 2 Quadro K5200 (nodes 23–32)

RAM – 64 GB of memory

Storage – 400 GB local scratch area

Interconnect – Mellanox ConnectX-3 FDR InfiniBand

Storage system – 140 TB Panasas Parallel File Storage

Parallel File Storage

The parallel file system for the Kestrel cluster is Panasas ActivStor14. There are two volumes accessible to users on the Panasas storage: /home and /cm/data1. Access to /cm/data1 is available upon request.
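To see which of these volumes are mounted and how full they are, a standard `df` query works; the sketch below checks the home volume (substitute `/cm/data1` once access has been granted):

```shell
# Show capacity and usage of the Panasas-backed home volume.
# /cm/data1 will only appear once access has been requested and granted.
df -h "$HOME"
```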

Scratch & Home Directories

Scratch Directory:

Users have personal "scratch" space (the /scratch directory) to use for job submissions. Scratch space is where PBS Pro jobs should be submitted to the job scheduler.

PLEASE NOTE: User scratch space IS NOT backed up. Data on this partition will be lost if it is deleted or overwritten, or if there is a catastrophic hardware failure.
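A minimal PBS Pro job script, to be submitted from scratch space, might look like the sketch below; the job name, resource request, and workload are illustrative placeholders, not site-specific settings:

```shell
#!/bin/bash
#PBS -N example_job          # job name (hypothetical)
#PBS -l select=1:ncpus=16    # one node, all 16 cores
#PBS -l walltime=01:00:00    # one-hour wall-clock limit
#PBS -j oe                   # merge stdout and stderr into one file

# Run from the directory the job was submitted from.
# (PBS_O_WORKDIR is set by PBS; fall back to "." outside the scheduler.)
cd "${PBS_O_WORKDIR:-.}"

# Placeholder workload -- replace with your actual application.
hostname
```

Submit it from your scratch directory with `qsub job.sh`; `qstat -u $USER` then shows its status in the queue.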

Home Directory:

Users have home directory space (the /home directory) to save important files. Users are responsible for moving their own files from scratch space back to their home directory if the data needs to be backed up.

Any items that need to be backed up must be saved within home directories. Home directory space is managed with quotas.
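Copying finished results out of scratch into the home directory is a routine step after a job completes. The sketch below uses stand-in local directories in place of the real /scratch and /home paths, which vary by user:

```shell
# Stand-ins for a user's scratch area and home directory
# (replace with your actual /scratch and /home paths).
mkdir -p demo_scratch/results demo_home
echo "final output" > demo_scratch/results/out.dat

# Copy the results tree into the backed-up, quota-managed space.
cp -r demo_scratch/results demo_home/

cat demo_home/results/out.dat   # → final output
```

Because home directories are quota-managed, copy only the results worth keeping rather than the full scratch working tree.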


Kestrel is NOT a supercomputing center architecture. Because its cooling and power systems are not redundant, it cannot guarantee uninterrupted operation.

If you are interested in supercomputing services, please review these links: