Computational Clusters

Request HPC Resources

Faculty, staff and students at ODU can request access to HPC resources. Collaborators outside ODU who work with ODU researchers can also request access with sign-off from an ODU researcher. For further information, contact itshelp@odu.edu.

Wahab Cluster

wahab.hpc.odu.edu

Funded through a grant from the National Science Foundation's Major Research Instrumentation Program, the Wahab cluster offers substantial computational power to critical research projects in disciplines including cybersecurity, resilience, and data-intensive science and engineering. The system's speed and capacity make it possible to compute and analyze enormous amounts of data related to sea-level rise, natural disasters, brain imaging, cyberattack paths, and more.

The Wahab research environment provides 6,320 computational cores and 72 NVIDIA V100 GPUs across 158 compute nodes and 18 GPU nodes. Each compute node contains 40 cores at 2.4 GHz and 368 GB of RAM. All nodes are connected via a Mellanox InfiniBand EDR backbone providing 100 Gbps throughput. Each GPU node contains four NVIDIA V100 GPUs, and each V100 includes 640 tensor cores to accelerate deep learning research.
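
As an illustration of how the GPU nodes are typically exercised, here is a minimal PyTorch sketch (PyTorch appears under Research Software below) that checks for a CUDA device and runs a single mixed-precision training step, the mode that engages the V100 tensor cores. The model and tensor sizes are placeholder values.

    # Minimal sketch: confirm a GPU is visible and run one mixed-precision step,
    # which is what exercises the V100 tensor cores. Sizes are placeholders.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Running on:", torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

    model = torch.nn.Linear(1024, 1024).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))

    x = torch.randn(64, 1024, device=device)
    target = torch.randn(64, 1024, device=device)

    with torch.cuda.amp.autocast(enabled=(device.type == "cuda")):
        loss = torch.nn.functional.mse_loss(model(x), target)

    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()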

The cluster is open to all campus researchers and will serve the research computing needs of students and faculty at ODU for years to come.

The Wahab cluster is supported in part by National Science Foundation grant CNS-1828593.

Cluster Hardware Types

Standard Compute
  • 158 nodes available
  • 40 slots (cores) per node (max)
  • 368 GB RAM per node
  • Node name prefixes: d1-6420a-###, d4-6420b-###, d5-6420b-###, d6-6420b-###

GPU
  • 18 nodes available
  • 32 slots (cores) per node (max)
  • NVIDIA V100 GPUs
  • 192 GB RAM per node
  • Node name prefixes: d2-w4140a-###, d3-w4140b-###, d4-w4140b-###

File Storage Locations

Path         Description                               Filesystem   Backed Up
/scratch     Working data area                         Lustre       No
/home        User home directories                     NFS          Yes
/RC/home     Per-user research storage directories     NFS          Yes
/RC/group    Per-group research storage directories    NFS          Yes
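
As a simple illustration of how these areas are commonly used together, the sketch below stages input data from backed-up research storage into /scratch for a run and copies results back afterward. The /RC/home and /scratch paths come from the table above; the user name, project folder and file names are hypothetical.

    # Minimal sketch: stage data onto /scratch (fast, not backed up) for a run,
    # then copy results back to backed-up research storage. Names are hypothetical.
    import shutil
    from pathlib import Path

    user = "exampleuser"                                   # hypothetical account name
    src = Path("/RC/home") / user / "project" / "input.dat"
    work = Path("/scratch") / user / "project"
    work.mkdir(parents=True, exist_ok=True)

    shutil.copy2(src, work / "input.dat")                  # stage input onto scratch

    # ... run the computation in `work` ...

    results = Path("/RC/home") / user / "project" / "results"
    results.mkdir(parents=True, exist_ok=True)
    shutil.copy2(work / "output.dat", results / "output.dat")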

Turing Cluster

turing.hpc.odu.edu

The Turing cluster contains 258 multi-core nodes. Standard compute nodes each contain between 16 and 32 cores and 128 GB of RAM. The cluster also provides 7 high-memory nodes with between 512 and 768 GB of RAM, 21 GPU nodes each with one or more NVIDIA GPUs, and 10 Intel Xeon Phi nodes each with 2 Xeon Phi MIC cards. An FDR InfiniBand network provides fast inter-node communication. The cluster is highly redundant, with clustered head nodes and a dedicated login node; users connect to the login node over aggregated 10 Gb links.

The Turing cluster supports a variety of research including fluid dynamics, genomics, molecular dynamics and oceanographic research.

Cluster Hardware Types

Standard Compute
  • 220 nodes available
  • 16-32 slots (cores) per node (max)
  • 128 GB RAM per node
  • Node name prefixes: coreV1-##-###, coreV2-##-###, coreV3-##-###, coreV4-##-###

GPU
  • 21 nodes available
  • 28-32 slots (cores) per node (max)
  • NVIDIA K40, K80, P100 or V100 GPU(s)
  • 128 GB RAM per node
  • Node name prefixes: coreV3-23-k40-###, coreV4-21-k80-###, coreV4-22-p100-###, coreV4-24-v100-###

Xeon Phi
  • 10 nodes available
  • 20 slots (cores) per node (max)
  • 2 Intel Xeon Phi MIC cards per node
  • 128 GB RAM per node
  • Node name prefix: coreV2-25-knc-###

High Memory
  • 7 nodes available
  • 32 slots (cores) per node (max)
  • 512-768 GB RAM per node
  • Node name prefix: coreV2-23-himem-###

File Storage Locations

Path             Description                               Filesystem   Backed Up
/home            User home directories                     NFS          Yes
/RC/home         Per-user research storage directories     NFS          Yes
/RC/group        Per-group research storage directories    NFS          Yes
/scratch-lustre  Working data area                         Lustre       No

Hadoop Cluster

The Hadoop cluster is made up of a resource manager, a name node and 6 data nodes; its hardware resources are summarized below.

The HDFS file system provides 7.71 TB of total storage. The data nodes are connected over an FDR InfiniBand network for high-speed inter-node communication, while the name node and resource manager are connected via 10 Gb interfaces.

The name node handles the management of the HDFS file system while the resource manager schedules jobs on the data nodes.
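
As a hedged illustration of that division of labor, the sketch below runs a small word count through the resource manager using PySpark. It assumes Spark with its Python bindings is available on the Hadoop cluster (not stated above), and the HDFS paths are hypothetical.

    # Minimal sketch: a word count scheduled by the resource manager (YARN) and
    # executed on the data nodes. Assumes PySpark is available; paths are hypothetical.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("wordcount-sketch")
        .master("yarn")                    # let the resource manager place the work
        .getOrCreate()
    )

    lines = spark.read.text("hdfs:///user/example/input.txt").rdd.map(lambda r: r[0])
    counts = (
        lines.flatMap(lambda line: line.split())
             .map(lambda word: (word, 1))
             .reduceByKey(lambda a, b: a + b)
    )
    counts.saveAsTextFile("hdfs:///user/example/wordcount_output")
    spark.stop()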

Cluster Hardware Resources

Resource Manager
  • 1 node available

Name Node
  • 1 node available

Data Node
  • 6 nodes available
  • 128 GB RAM per node
  • 3 x 400 GB SSD storage per node

CCI-R Environment

COVA CCI is southeastern Virginia's engine for research, innovation, and commercialization of next-generation cybersecurity technologies, particularly in the areas of Cyber-Physical Systems Security (CPSS), 5G, and Artificial Intelligence (AI) in the maritime, defense, and transportation industries.

CCI Research Overview

The COVA CCI research environment is focused on advancing research in the coastal Virginia region. It consists of a private cloud with the ability to host research applications customized to specific research needs.

The advantages of using the research cloud over a typical workstation include:

  • Designed for research involving sensitive data types
  • Access to large compute capacity
  • Access to GPUs for compute-intensive machine learning workflows
  • Enterprise-grade hardware and software for reliability
  • Hosted in an enterprise data center that buffers against cooling and power issues
  • Support for a variety of workloads, including container orchestration and virtual machines

Cluster Hardware Types

Standard Compute
  • 20 nodes available

GPU
  • 4 nodes available
  • 52 cores per node
  • 192 GB RAM per node
  • 16 NVIDIA V100 GPUs


RRCE Environment

The Regulated Research Computational Environment (RRCE) is built through a partnership between ODU, EVMS and Sentara. It provides a virtual lab environment for computation on and storage of regulated data sets, including CUI and HIPAA-protected data. Capabilities range from high-capacity storage to cutting-edge GPUs that accelerate AI/ML research, data science and HPC. The environment is built on the tiCrypt software and provides a secure interface for transferring data and managing research projects in multiple security enclaves with disparate regulatory requirements.

Features of RRCE

  • High-speed connectivity between ODU, VMASC and EVMS
  • Hyperconverged virtual environment
  • 240 computational cores
  • 12 NVIDIA A100 GPUs
  • Secure, segmented network infrastructure
  • NIST 800-171 and CMMC Level 2 compliance

Types of protected data

  • General sensitive data (e.g., business confidential)
  • Regulated sensitive data:
    • FERPA (educational data, e.g., student records)
    • Personal Health Information (regulated by HIPAA)
    • Controlled Unclassified Information (CUI, NIST 800-171)
    • ITAR data (export-controlled data)

Features

  • Linux and Windows virtual machines built on demand from golden images
  • Persistent, encrypted data storage and sharing via tiCrypt's Vault and Drives
  • Audit logging and alerting
  • Data labeling for security classification
  • Secure data ingestion via forms and SFTP

Research Software

The set of software packages installed on the compute clusters changes continually with researchers' needs.

Open Source Packages
  • Physics/chemistry: Espresso, GROMACS
  • Biology: BLAST, BioPerl, migrate-n
  • Engineering: OpenFOAM
  • Statistics/math: R, pandas, Octave
  • Scripting: Python, Perl, Ruby, Java, ROOT
  • Visualization: VTK, ParaView
Commercial Packages
  • Chemistry: Gaussian, Molpro, VASP, AMBER
  • Math: MATLAB
  • Biology: CLC Bio
  • Engineering/physics: COMSOL, ANSYS
  • Statistics/business: SAS
Software Development Tools & Libraries
  • Compilers: Intel, PGI, GNU (Fortran, C, C++), Clang, NVIDIA (CUDA)
  • Libraries: math (MKL, LAPACK, fftw3), machine learning (TensorFlow, PyTorch, Caffe, scikit-learn)
  • Parallel programming (MPI, Charm++), illustrated briefly below
  • Debuggers, profilers (GDB, TotalView, PAPI)
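
A minimal illustration of the MPI support listed above, written with the mpi4py Python bindings (assumed to be available alongside the listed MPI stack; only MPI itself is named in the list):

    # Minimal sketch: MPI hello-world via mpi4py (an assumed add-on to the MPI stack).
    # Launch under the MPI runtime, e.g.: mpirun -np 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank reports itself; rank 0 also summarizes which nodes took part.
    names = comm.gather(MPI.Get_processor_name(), root=0)
    print(f"Hello from rank {rank} of {size}")
    if rank == 0:
        print("Ranks ran on nodes:", sorted(set(names)))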

These are representative lists only. The complete list is much longer and keeps growing!
If there is software you need that is not on this list, please contact us and we will install it for you.

Software Download

Login Tools

Windows:

  • PuTTY is a lightweight SSH client used for command-line access to the clusters.
  • RDP is recommended for graphical access to cluster resources.

Mac/Linux:

  • Mac and Linux systems typically have an SSH client built in. You can also download the Microsoft RDP client for graphical access.
File Transfer Tools

We recommend a graphical file transfer client for moving files to and from the cluster; such tools are easy to use and work across multiple platforms. After you download and install the application, configure it to connect to the cluster (either Turing or Wahab).
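
For scripted or automated transfers, the same kind of connection can be made from Python with the paramiko library. This is a minimal sketch only: it assumes paramiko is installed on your own workstation (it is not part of the cluster software lists above), the user name and paths are hypothetical placeholders, and your account may require key-based or interactive authentication.

    # Minimal sketch: upload a file to the cluster over SFTP using paramiko
    # (assumed to be installed locally). User name and paths are placeholders.
    import paramiko

    host = "wahab.hpc.odu.edu"        # or turing.hpc.odu.edu
    user = "exampleuser"              # hypothetical account name

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # verify host keys properly in practice
    client.connect(host, username=user)   # add password=... or key_filename=... as needed

    sftp = client.open_sftp()
    sftp.put("local_output.dat", "/scratch/exampleuser/local_output.dat")
    sftp.close()
    client.close()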