Home

This is the documentation of the BIH high-performance computing (HPC) cluster, also called HPC 4 Research. The BIH HPC cluster is maintained by CUBI (Core Unit Bioinformatics).

This documentation is maintained by BIH CUBI and the user community. It is a living document that you can update and add to; see How-To: Contribute to this Document for details.

⬅ The global table of contents is on the left; the table of contents for the current page is on the right ➡.

Connecting to the Cluster

  • Web Access: https://hpc-portal.cubi.bihealth.org
  • SSH-Based Access:

    # Interactive login as a Charité (user_c) or MDC (user_m) user
    ssh -l user_c hpc-login-1.cubi.bihealth.org  # or hpc-login-2...
    ssh -l user_m hpc-login-1.cubi.bihealth.org  # or hpc-login-2...
    # File transfer as a Charité or MDC user
    sftp user_c@hpc-transfer-1.cubi.bihealth.org  # or hpc-transfer-2...
    sftp user_m@hpc-transfer-1.cubi.bihealth.org  # or hpc-transfer-2...
    # You can also log into the transfer nodes (no Slurm there)
    ssh -l user_c hpc-transfer-1.cubi.bihealth.org  # or hpc-transfer-2...
    ssh -l user_m hpc-transfer-1.cubi.bihealth.org  # or hpc-transfer-2...
    
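
For example, a one-off upload or download via the transfer nodes might look like the following. This is a minimal sketch: the file and directory names are placeholders, and user_c stands for your own account name.

    # Upload a local file to your home directory on the cluster:
    scp my_data.txt user_c@hpc-transfer-1.cubi.bihealth.org:
    # Download a results directory from the cluster (-r = recursive):
    scp -r user_c@hpc-transfer-1.cubi.bihealth.org:results/ ./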

Getting Started

To get started, the following pages are suggested reading, in order, after your first successful connection to the cluster.

  1. Getting Access.
  2. Getting Help (Writing Good Tickets; if no answer found, contact the HPC Helpdesk).
  3. For the Impatient.
  4. The Cluster Tutorial: First Steps.

Then, continue reading through the manual.

Acknowledging BIH HPC Usage

Please acknowledge use of the cluster in your manuscripts as follows: "Computation has been performed on the HPC for Research/Clinic cluster of the Berlin Institute of Health". Please also add your publications that used the cluster to this list.

Maintenance Announcements

Current and Future Maintenance

  • 🤓 August 30: Replace the defective NVIDIA V100 and make hpc-gpu-4 available to Slurm users again.
  • 🤓 March 9: Update the TensorFlow How-To to Slurm and TensorFlow 2.0.
  • ⚠ March 22-23: Compute and storage downtime.
  • ⚠ March: Deprecation of DRMAA usage.
  • ✨ March 1: New scheduler settings to address high per-user job counts.
  • ⚠ February 14: Limiting of allocatable memory per user.
  • 📖 February 3: Addition of Ganglia documentation.
  • 🩹 February 3: Ganglia monitoring of GPFS and NVIDIA GPU metrics.
  • ⚠ January 31: Enforcement of the localtmp resource for jobs using more than 100 MB of node-local storage (see the sketch after this list).
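
As a rough illustration of the localtmp item above: assuming localtmp is exposed as a Slurm generic resource (gres) and that $TMPDIR points at the granted node-local scratch (both are assumptions here; the authoritative syntax is in the Slurm Scheduler section), a batch script might request and use it like this:

    #!/bin/bash
    #SBATCH --job-name=localtmp-demo
    #SBATCH --mem=4G               # memory per user is also limited (see February 14)
    #SBATCH --gres=localtmp:5G     # assumed resource name and unit syntax

    # Stage data into node-local scratch, work there, copy results back.
    cp "$HOME/input.dat" "$TMPDIR/"
    cd "$TMPDIR"
    ./process input.dat > output.dat   # placeholder for your actual program
    cp output.dat "$SLURM_SUBMIT_DIR/"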

See Maintenance for a detailed list of current, planned, and previous maintenance and update work.

Setting Up Your Connection

After you have been registered with the cluster (your group leader/PI registers you via email to hpc-helpdesk@bih-charite.de), you will need to perform some configuration steps. Here are the most important points:

  1. Generating SSH Keys 🔑 in Linux or Windows (a minimal sketch of steps 1 and 3 follows this list).
  2. Submitting the key ⬆ to Charite or to MDC.
  3. Configuring your SSH client 🔧 on Linux and Mac or Windows.
  4. Bonus: Connecting from external networks 🛸.
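
For orientation, here is a minimal sketch of steps 1 and 3 on Linux/Mac. The ed25519 key type, the host alias hpc-login, and the account name user_c are assumptions for illustration; follow the linked pages for the key type and submission process your organization actually requires.

    # Step 1 (sketch): generate a key pair; the linked pages state which
    # key types Charité/MDC actually accept.
    ssh-keygen -t ed25519 -C "your_email@example.org"

    # Step 3 (sketch): add a host alias so that "ssh hpc-login" just works.
    cat >> ~/.ssh/config <<'EOF'
    Host hpc-login
        HostName hpc-login-1.cubi.bihealth.org
        User user_c
        IdentityFile ~/.ssh/id_ed25519
    EOF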

There are various other topics covered in the "Connecting" section that might be of interest to you.

Documentation Structure

The documentation is structured as follows:

  • Administrative: information about administrative processes such as getting access, registering users, work groups, and projects.
  • Getting Help: how to obtain help in using the BIH HPC.
  • Overview: detailed information about the cluster setup, including the hardware, network, software, and policies.
  • Connecting: technical help on connecting to the cluster.
  • First Steps: information for getting started quickly.
  • Slurm Scheduler: technical help on using the Slurm scheduler.
  • Best Practice: guidelines on recommended usage of certain aspects of the system.
  • Static Data: documentation about the static data (files) collection on the cluster.
  • How-To: short(ish) solutions for specific technical problems.
  • Miscellaneous: a growing list of pages that don't fit anywhere else.

Last update: February 7, 2023