High Performance Computing Options

With several different High Performance Computing (HPC) options available for researchers at the University, it can be difficult to understand which HPC option best fits a particular scenario.

The following overview sets out key criteria for each HPC option to help you decide which is best suited to your research compute needs.

Please note, research data is considered a University information asset and should be stored in University-approved storage. For further information, please refer to the Data Retention and Preservation web page and the Information Management Policy. If you have questions, please contact Records Services.

  • Phoenix - University-owned HPC

    For high-volume compute, Phoenix is the preferred option within the University. Use of the existing Phoenix infrastructure is free for University-related research.

    Phoenix is suitable when jobs:

    • Will benefit from parallelization and have consistent CPU usage over time. Phoenix provides 32 to 40 cores per node, with approximately 10,800 cores available in total (see the sketch after this list for the kind of CPU-bound, parallel workload that fits well).
    • Are GPU-accelerated, such as machine learning workloads. Phoenix provides 288 Nvidia K80 GPUs and 16 Volta V100 GPUs. Researchers at the Australian Institute for Machine Learning (AIML) can also benefit from the 48 Nvidia V100 GPUs in the Volta logical cluster.
    • Need to process large datasets, as Phoenix provides a 2 PB high-performance Lustre file system shared across all nodes.
    • Are small to mid-size I/O-intensive jobs, as each Phoenix node offers 350 GB to 500 GB of local disk.
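
    As a rough illustration of the kind of CPU-bound, embarrassingly parallel work that suits a single Phoenix node, the following Python sketch spreads a placeholder workload across all available cores. The simulate() function and its parameters are hypothetical stand-ins for your own analysis code, not part of any Phoenix tooling.

      import math
      from multiprocessing import Pool, cpu_count

      def simulate(seed: int) -> float:
          # Placeholder CPU-bound task: a tight numerical loop.
          total = 0.0
          for i in range(1, 1_000_000):
              total += math.sin(seed * i) / i
          return total

      if __name__ == "__main__":
          workers = cpu_count()  # a Phoenix node would report 32-40 cores
          with Pool(processes=workers) as pool:
              results = pool.map(simulate, range(workers * 4))
          print(f"Ran {len(results)} tasks across {workers} cores")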

    Phoenix is not suitable when jobs:

    • Require a graphical user interface.
    • Need interaction while running.
    • Are large, I/O-intensive jobs dealing with a very large number of small files (e.g. over 100,000 files of around 1 MB each). These jobs congest the shared Lustre file system and significantly degrade performance for all jobs on Phoenix; a common mitigation is sketched after this list.
    • Are non-parallelizable and run for a very long time (i.e. more than 7 days).
    • Exceed the researcher's allocated fraction of the cluster, as determined by the portion of co-investment provided by their group or faculty. Such jobs have difficulty reserving the required resources, resulting in long wait times before they are executed.
    • Are urgent, as queue wait times vary.
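
    One common way to work around the small-file problem is to bundle many small inputs into a single archive before staging them on the Lustre file system, so a job reads one large file instead of opening hundreds of thousands of tiny ones. The following Python sketch uses hypothetical paths; adjust them to your own project layout.

      import tarfile
      from pathlib import Path

      src = Path("inputs")          # hypothetical directory of small files
      archive = Path("inputs.tar")  # single large file to stage on Lustre

      # Pack every file under src into one archive.
      with tarfile.open(archive, "w") as tar:
          for f in sorted(src.rglob("*")):
              if f.is_file():
                  tar.add(f, arcname=str(f.relative_to(src)))

      # A job can then read members back out of the single archive.
      with tarfile.open(archive, "r") as tar:
          for member in tar.getmembers()[:5]:
              print(member.name, member.size)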

    As Phoenix is a shared University resource, job priority can be reset and lowered, increasing wait times. Phoenix is also not recommended for small jobs that do not need HPC-level performance, or for projects whose demands are too high for the cluster, as Phoenix usage would then be inconsistent or spiky over several months.

    Visit the Phoenix HPC page for more information.

  • Server Hosting

    Virtual machines (VMs) can be hosted on premises using our VMware infrastructure. Log a ticket to discuss this option further.

    Visit the Server Hosting page for more information. Please note, VMs can also be provisioned in RONIN – visit the RONIN page for more information.

  • RONIN – Configure and access Amazon Web Services (AWS)

    RONIN is a user-friendly, web-based management dashboard that allows researchers to leverage the full power of AWS cloud-based computing without facing a steep learning curve. While RONIN provides additional flexibility for different research scenarios, our on-site Phoenix HPC environment remains the most cost-effective option and is the preferred HPC option where possible.

    RONIN and AWS are suitable for tasks:

    • When you have grant or faculty funding to pay for your research compute. Please note, you may be eligible for a quota of up to US$330 per project per year; if your compute requirements exceed that amount, you will need to pay the difference.