Siemens Simcenter STAR-CCM+ Runbook


Simcenter STAR-CCM+ is a complete multiphysics solution for simulating products and designs. This runbook walks you through deploying a Simcenter STAR-CCM+ cluster on Oracle Cloud with low-latency networking between the compute nodes. Running Simcenter STAR-CCM+ on Oracle Cloud is quite straightforward; follow this guide for all the tips and tricks.


Prerequisites

  • Permission to manage the following types of resources in your Oracle Cloud Infrastructure tenancy: vcns, internet-gateways, route-tables, network-security-groups, subnets, and instances.

  • Quota to create the following resources: 1 VCN, 2 subnets, 1 Internet Gateway, 1 NAT Gateway, 1 Service Gateway, 3 route rules, and a minimum of 2 compute instances in an instance pool or cluster network (plus a bastion host).

If you don't have the required permissions and quota, contact your tenancy administrator. See Policy Reference, Service Limits, Compartment Quotas.

Deploy Using Oracle Resource Manager

  1. Click Deploy to Oracle Cloud

    If you aren't already signed in, when prompted, enter the tenancy and user credentials.

  2. Review and accept the terms and conditions.

  3. Select the region where you want to deploy the stack.

  4. Follow the on-screen prompts and instructions to create the stack.

  5. After creating the stack, click Terraform Actions, and select Plan.

  6. Wait for the job to be completed, and review the plan.

    To make any changes, return to the Stack Details page, click Edit Stack, and make the required changes. Then, run the Plan action again.

  7. If no further changes are necessary, return to the Stack Details page, click Terraform Actions, and select Apply.

Deploy Using the Terraform CLI

Clone the Module

Now, you'll want a local copy of this repo. You can make that with the commands:

git clone <this repository's URL>
cd oci-hpc-runbook-starccm

Set Up and Configure Terraform

  1. Complete the prerequisites described here.

  2. Create a terraform.tfvars file, and specify the following variables:

# Authentication
tenancy_ocid         = "<tenancy_ocid>"
user_ocid            = "<user_ocid>"
fingerprint          = "<finger_print>"
private_key_path     = "<pem_private_key_path>"

# Region
region = "<oci_region>"

# Availability Domain
ad = "<availability_domain>" # for example "GrCH:US-ASHBURN-AD-1"

# Bastion 
bastion_ad               = "<availability_domain>" # for example "GrCH:US-ASHBURN-AD-1"
bastion_boot_volume_size = "<bastion_boot_volume_size>" # for example 50
bastion_shape            = "<bastion_shape>" # for example "VM.Standard.E3.Flex"
boot_volume_size         = "<boot_volume_size>" # for example 100
node_count               = "<node_count>" # for example 2
ssh_key                  = "<ssh_key>"
targetCompartment        = "<targetCompartment>" 
use_custom_name          = false
use_existing_vcn         = false
use_marketplace_image    = true
use_standard_image       = true
cluster_network          = false
instance_pool_shape      = "<instance_pool_shape>" # for example VM.Standard.E3.Flex
starccm_binaries         = "<starccm_binaries>" # PAR URL of the uploaded StarCCM+ binaries
starccm_version          = "<starccm_version>" # for example "STAR-CCM+15.02.009_01_linux-x86_64-2.12_gnu7.1"

Create the Resources

Run the following commands:

terraform init
terraform plan
terraform apply

Destroy the Deployment

When you no longer need the deployment, you can run this command to destroy the resources:

terraform destroy


Architecture

The architecture for this runbook is as follows: one small machine (the bastion) that you connect to, and compute nodes on a separate private network linked with RDMA (RoCE v2) networking. The bastion is accessible over SSH by anyone with the key (or VNC if you decide to enable it); the compute nodes are reachable only through the bastion, inside the network. This is made possible with 1 Virtual Cloud Network with 2 subnets, one public and one private.
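Because the compute nodes are reachable only through the bastion, a ProxyJump entry in your local SSH configuration saves a manual two-hop login. A minimal sketch, assuming the default opc user and placeholder host names (replace the bastion IP with your deployment's output):

```
# ~/.ssh/config fragment (hypothetical values; adjust user and IP)
Host bastion
    HostName <bastion_public_ip>
    User opc
Host hpc-node-*
    User opc
    ProxyJump bastion
```

With this in place, ssh hpc-node-1 from your workstation tunnels through the bastion automatically.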

The above baseline infrastructure provides the following specifications:

  • Networking
    • 1 x 100 Gbps RDMA over Converged Ethernet (RoCE) v2
    • Latency as low as 1.5 µs
  • HPC Compute Nodes (BM.HPC2.36)
    • 6.4 TB Local NVME SSD storage per node
    • 36 cores per node
    • 384 GB memory per node

Upload StarCCM+ binaries to Object Storage

  1. Log In

You can start by logging in to the Oracle Cloud console. If this is your first time, instructions to do so are available here. Select the region in which you wish to create your Object Storage bucket; click the current region in the top-right dropdown list to choose another one.

  2. Go to Buckets by clicking the navigation menu and selecting Storage > Buckets.

  3. Create a bucket by clicking Create Bucket. Give your bucket a name and select the storage tier and encryption.

  4. Once the bucket has been created, upload an object (the binary) to the bucket by clicking Upload under Objects.

  5. Create a Pre-Authenticated Request (PAR) using the following steps:

    • Click the Actions menu for the object, then select Create Pre-Authenticated Request.

    • Select Object as the Pre-Authenticated Request Target, then select an access type.

    • Click Create Pre-Authenticated Request.

    • Be sure to copy the PAR URL by clicking Copy before closing, because you will NOT have access to the URL again.

  6. Add this PAR URL to the starccm_binaries variable.
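The value that goes into starccm_binaries is the full PAR URL: the accessUri path that the PAR provides, appended to the region's Object Storage endpoint. A small sketch of that assembly (the region, token, namespace, bucket, and object names below are illustrative placeholders):

```python
# Sketch: assemble the full PAR URL expected by the starccm_binaries variable.
# A Pre-Authenticated Request yields an "accessUri" path; the usable URL is
# that path appended to the regional Object Storage endpoint.

def par_url(region: str, access_uri: str) -> str:
    """Join the Object Storage endpoint for `region` with the PAR accessUri."""
    return f"https://objectstorage.{region}.oraclecloud.com{access_uri}"

# Placeholder values for illustration only
print(par_url("us-ashburn-1", "/p/TOKEN/n/mytenancy/b/starccm/o/starccm.tar.gz"))
# prints https://objectstorage.us-ashburn-1.oraclecloud.com/p/TOKEN/n/mytenancy/b/starccm/o/starccm.tar.gz
```

If you copy the PAR URL directly from the console, it is already in this full form and can be pasted into terraform.tfvars as-is.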

Running STAR-CCM+

Install Intel MPI 2018 libraries

Run these commands on every node.

cd /nfs/cluster
sudo rpm --import GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
sudo yum-config-manager --add-repo=<intel_mpi_yum_repo_url>
sudo yum install -y intel-mpi-2018.4-057 intel-mpi-samples-2018.4-274

Running StarCCM+

  1. Navigate to Bastion - Find the public IP address of your remote host after the deployment job has finished:
Resource Manager

If you deployed your stack via Resource Manager, find the public IP address of the bastion at the bottom of the job logs in the console.

Command Line

If you deployed your stack via the Terraform CLI, find the public IP address of the bastion at the bottom of the console logs on the Logs page, or on the Outputs page.

  2. SSH into your bastion host.

  3. SSH into the cluster:

ssh hpc-node-1

  4. Go to your model:

cd /nfs/scratch/starccm/work

  5. Navigate to the script's location and run it.

    Be sure to set execution permission before running the script:

chmod +x <script_name>

    Make sure to update the script with your specific variables (lines 17 - 28).
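The variables that such a run script typically sets are the node list, the core count, and the model path. A minimal sketch of that logic, with hypothetical host names, paths, and STAR-CCM+ flags (adapt all of them to your deployment; the final command is echoed rather than executed):

```shell
#!/bin/bash
# Sketch of a launch wrapper (hypothetical; adjust node names, model path,
# and starccm+ flags to your install). Builds an MPI machinefile from the
# node list and prints the command such a script would run.
NODES="hpc-node-1 hpc-node-2"             # placeholder hostnames
CORES_PER_NODE=36                          # BM.HPC2.36 has 36 cores per node
SIM=/nfs/scratch/starccm/work/model.sim    # placeholder model path

: > machinefile
NP=0
for n in $NODES; do
    echo "$n" >> machinefile               # one host per line for MPI
    NP=$((NP + CORES_PER_NODE))
done

# Dry run: echo the command instead of executing starccm+
echo starccm+ -batch -np "$NP" -machinefile machinefile -mpi intel "$SIM"
```

With two BM.HPC2.36 nodes this would launch on 72 ranks; drop the echo once the paths and flags match your environment.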

Benchmark Example

Performance of STAR-CCM+ is often measured using the LeMans benchmark with 17 and 105 million cells. The following graphs show how using more nodes impacts the runtime, with scaling very close to 100%. The RDMA network only starts to differentiate itself from regular TCP runs as the cells-per-core ratio goes down.
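To make that ratio concrete, here is the arithmetic for the larger LeMans case on BM.HPC2.36 nodes (36 cores each); the node count used below is illustrative:

```python
# Illustrative arithmetic: cells-per-core ratio for a LeMans benchmark case.
# The RDMA advantage tends to appear as this ratio shrinks.

def cells_per_core(cells: int, nodes: int, cores_per_node: int = 36) -> float:
    """Cells divided across all cores of the cluster."""
    return cells / (nodes * cores_per_node)

print(round(cells_per_core(105_000_000, 16)))  # 105M cells on 16 nodes
# prints 182292
```

Scaling the same case to more nodes drives the ratio down, which is the regime where the RoCE v2 fabric pulls ahead of plain TCP.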

17 Million Cells

105 Million Cells