CESM2.1.4 Machine Definitions
Model Version: 2.1.4
HTML Created: 2024-10-02
Support Levels
Tested - Indicates that the machine has passed functional CESM regression testing but may not necessarily be appropriate for large coupled experiments due to machine restrictions; for example, a limited number of nodes or limited disk space.
Scientific - Indicates that the machine has passed statistical validation and CESM regression testing and may be used for large coupled experiments.
Related Links
Reference: CIME "Defining the machine" documentation
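Each row in the table below summarizes one machine entry in CIME's config_machines.xml; the pes/node and max_tasks/node columns appear to correspond to the MAX_TASKS_PER_NODE and MAX_MPITASKS_PER_NODE fields. As a minimal sketch of such an entry (the element names are real CIME fields, but the machine name and all values here are illustrative rather than taken from any supported machine):

```xml
<!-- Illustrative machine entry for config_machines.xml.
     "example-machine" and its values are invented; the element names
     (DESC, OS, COMPILERS, MAX_TASKS_PER_NODE, MAX_MPITASKS_PER_NODE,
     BATCH_SYSTEM) are actual CIME fields. -->
<machine MACH="example-machine">
  <DESC>Example Linux cluster, 48 pes/node, batch system is SLURM</DESC>
  <OS>LINUX</OS>
  <COMPILERS>intel,gnu</COMPILERS>
  <!-- pes/node column: processing elements schedulable per node -->
  <MAX_TASKS_PER_NODE>48</MAX_TASKS_PER_NODE>
  <!-- max_tasks/node column: maximum MPI ranks per node -->
  <MAX_MPITASKS_PER_NODE>48</MAX_MPITASKS_PER_NODE>
  <BATCH_SYSTEM>slurm</BATCH_SYSTEM>
</machine>
```

The machine definitions that ship with a CIME checkout can be listed with `query_config --machines`.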
Name | OS | Compilers | pes/node | max_tasks/node | Support Level | Details |
---|---|---|---|---|---|---|
aleph | CNL | intel,gnu,cray | 40 | 40 | Unsupported | Cray XC50 Skylake, os is CNL, 40 pes/node, batch system is PBSPro |
arc4 | LINUX | intel | 40 | 40 | Unsupported | A port of CESM to the Leeds ARC4 machine (CEMAC), batch system is SGE |
archer2 | CNL | gnu,cray | 128 | 128 | Unsupported | HPE Cray EX, two AMD EPYC Zen2 CPUs per node, 128 pes/node, batch system is SLURM |
athena | LINUX | intel,intel15 | 30 | 15 | Unsupported | CMCC IBM iDataPlex, os is Linux, 16 pes/node, batch system is LSF, MPI library is mpich |
aws-hpc6a | LINUX | intel | 96 | 96 | Unsupported | AWS HPC6a (96-core AMD) nodes |
bluewaters | CNL | intel,pgi,cray,gnu | 32 | 16 | Tested | NCSA Cray XE6, os is CNL, 32 pes/node, batch system is PBS |
centos7-linux | LINUX | gnu | 8 | 8 | Unsupported | Example port to a CentOS 7 Linux system with gcc, netcdf, pnetcdf and mpich, using modules from http://www.admin-magazine.com/HPC/Articles/Environment-Modules |
cheyenne | LINUX | intel,gnu,pgi | 36 | 36 | Scientific | NCAR SGI platform, os is Linux, 36 pes/node, batch system is PBS |
comet | LINUX | intel | 24 | 24 | Unsupported | Comet is a dedicated eXtreme Science and Engineering Discovery Environment (XSEDE) cluster designed by Dell and SDSC delivering 2.76 peak petaflops. It features Intel next-gen processors with AVX2, Mellanox FDR InfiniBand interconnects, and Aeon storage. https://www.sdsc.edu/support/user_guides/comet.html |
constance | LINUX | intel,pgi | 24 | 24 | Unsupported | PNL Haswell cluster, OS is Linux, batch system is SLURM |
daint | CNL | pgi,cray,gnu | 12 | 12 | Unsupported | CSCS Cray XC50, os is SUSE SLES, 12 pes/node, batch system is SLURM |
derecho | CNL | intel | 128 | 128 | Unsupported | NCAR AMD EPYC |
eastwind | LINUX | pgi,intel | 12 | 12 | Unsupported | PNL IBM Xeon cluster, os is Linux (pgi), batch system is SLURM |
euler2 | LINUX | intel,pgi | 24 | 24 | Unsupported | Euler II Linux Cluster ETH, 24 pes/node, InfiniBand, XeonE5_2680v3, batch system LSF |
euler3 | LINUX | intel,pgi | 4 | 4 | Unsupported | Euler III Linux Cluster ETH, 4 pes/node, Ethernet, XeonE3_1585Lv5, batch system LSF |
euler4 | LINUX | intel,pgi | 36 | 36 | Unsupported | Euler IV Linux Cluster ETH, 36 pes/node, InfiniBand, XeonGold_6150, batch system LSF |
gaea | CNL | pgi | 24 | 24 | Unsupported | NOAA XE6, os is CNL, 24 pes/node, batch system is PBS |
grace | LINUX | intel | 96 | 48 | Unsupported | Intel Xeon 6248R 3.0 GHz ("Cascade Lake"), 48 cores on two sockets (24 cores/socket), batch system is SLURM |
gust | CNL | intel | 128 | 128 | Unsupported | NCAR AMD EPYC test system, 16 CPU nodes and 2 GPU nodes |
hobart | LINUX | intel,pgi,nag,gnu | 48 | 48 | Scientific | NCAR CGD Linux Cluster 48 pes/node, batch system is PBS |
homebrew | Darwin | gnu | 8 | 4 | Unsupported | Customize these fields as appropriate for your system, in particular changing MAX_TASKS_PER_NODE and MAX_MPITASKS_PER_NODE to the number of cores on your machine. You may also want to change instances of '$ENV{HOME}/projects' to your desired directory organization. You can use this in either of two ways: (1) without making any changes, by adding `--machine homebrew` to create_newcase or create_test; or (2) by copying this into a config_machines.xml file in your personal .cime directory and then changing the machine name (MACH="homebrew") to your machine name and the NODENAME_REGEX to something matching your machine's hostname (see the sketch after the table). With (2), you should not need the `--machine` argument, because the machine should be determined automatically. However, with (2), you will also need to copy the homebrew-specific settings in config_compilers.xml into a config_compilers.xml file in your personal .cime directory, again changing the machine name (MACH="homebrew") to your machine name. |
izumi | LINUX | intel,pgi,nag,gnu | 48 | 48 | Unsupported | NCAR CGD Linux Cluster 48 pes/node, batch system is PBS |
laramie | LINUX | intel,gnu | 36 | 36 | Unsupported | NCAR SGI test platform, os is Linux, 36 pes/node, batch system is PBS |
lawrencium-lr2 | LINUX | intel | 12 | 12 | Unsupported | Lawrencium LR2 cluster at LBL, OS is Linux (intel), batch system is SLURM |
lawrencium-lr3 | LINUX | intel | 16 | 16 | Unsupported | Lawrencium LR3 cluster at LBL, OS is Linux (intel), batch system is SLURM |
lonestar5 | LINUX | intel | 48 | 24 | Unsupported | Lonestar5 cluster at TACC, OS is Linux (intel), batch system is SLURM |
melvin | LINUX | gnu | 64 | 64 | Unsupported | Linux workstation for Jenkins testing |
mira | BGQ | ibm | 64 | 8 | Unsupported | ANL IBM BG/Q, os is BGQ, 16 cores/node (64 hardware threads), batch system is cobalt |
modex | LINUX | gnu | 12 | 12 | Unsupported | Medium-sized Linux cluster at BNL, Torque scheduler |
olympus | LINUX | pgi | 32 | 32 | Unsupported | PNL cluster, os is Linux (pgi), batch system is SLURM |
perlmutter | LINUX | intel | 256 | 128 | Unsupported | Perlmutter CPU-only nodes at NERSC (Phase 2 only). Each node has two 64-core AMD EPYC 7713 (Milan) CPUs and 512 GB of memory; batch system is Slurm |
pleiades-bro | LINUX | intel | 28 | 28 | Unsupported | NASA/AMES Linux Cluster, Linux (x86_64), 2.4 GHz Broadwell Intel Xeon E5-2680v4 processors, 28 pes/node (two 14-core processors) and 128 GB of memory/node, batch system is PBS |
pleiades-has | LINUX | intel | 24 | 24 | Unsupported | NASA/AMES Linux Cluster, Linux (x86_64), 2.5 GHz Haswell Intel Xeon E5-2680v3 processors, 24 pes/node (two 12-core processors) and 128 GB of memory/node, batch system is PBS |
pleiades-ivy | LINUX | intel | 20 | 20 | Unsupported | NASA/AMES Linux Cluster, Linux (x86_64), Altix ICE, 2.8 GHz Ivy Bridge processors, 20 cores/node and 3.2 GB of memory per core, batch system is PBS |
pleiades-san | LINUX | intel | 16 | 16 | Unsupported | NASA/AMES Linux Cluster, Linux (x86_64), Altix ICE, 2.6 GHz Sandy Bridge processors, 16 cores/node and 32 GB of memory per node, batch system is PBS |
sandia-srn-sems | LINUX | gnu | 64 | 64 | Unsupported | Linux workstation at Sandia on SRN with SEMS TPL modules |
sandiatoss3 | LINUX | intel | 16 | 16 | Unsupported | SNL TOSS3 cluster |
stampede2-knl | LINUX | intel | 256 | 64 | Unsupported | Intel Xeon Phi 7250 ("Knights Landing"), batch system is SLURM |
stampede2-skx | LINUX | intel | 96 | 48 | Unsupported | Intel Xeon Platinum 8160 ("Skylake"), 48 cores on two sockets (24 cores/socket), batch system is SLURM |
theia | LINUX | intel | 24 | 24 | Unsupported | theia |
theta | CNL | intel,gnu,cray | 64 | 64 | Tested | ALCF Cray XC40 KNL, os is CNL, 64 pes/node, batch system is cobalt |
ubuntu-latest | LINUX | gnu | 4 | 4 | Unsupported | Used for GitHub testing |
zeus | LINUX | intel | 72 | 36 | Unsupported | CMCC Lenovo ThinkSystem SD530, os is Linux, 36 pes/node, batch system is LSF |
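The homebrew row above describes a do-it-yourself port, and its second usage mode amounts to a personal machine file. Below is a hypothetical sketch of such a ~/.cime/config_machines.xml, adapted from what that cell describes: the machine name `mymac`, the hostname regex, and the directory paths are placeholders to replace with your own, while the element names are real CIME fields.

```xml
<?xml version="1.0"?>
<!-- Hypothetical personal ~/.cime/config_machines.xml, following the
     homebrew entry's instructions. MACH, NODENAME_REGEX, and the paths
     are placeholders; the element names are actual CIME fields. -->
<config_machines>
  <machine MACH="mymac">
    <DESC>Personal workstation, adapted from the homebrew entry</DESC>
    <!-- Must match this machine's hostname so CIME detects it
         automatically, making the machine argument unnecessary -->
    <NODENAME_REGEX>mymac.*</NODENAME_REGEX>
    <OS>Darwin</OS>
    <COMPILERS>gnu</COMPILERS>
    <MPILIBS>mpich</MPILIBS>
    <CIME_OUTPUT_ROOT>$ENV{HOME}/projects/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$ENV{HOME}/projects/cesm-inputdata</DIN_LOC_ROOT>
    <!-- Change these to the number of cores on your machine
         (homebrew's table values, 8 and 4, are shown) -->
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>4</MAX_MPITASKS_PER_NODE>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
  </machine>
</config_machines>
```

With mode (1) you would instead use the stock entry directly, e.g. `./create_newcase --case mytest --res f19_g16 --compset X --machine homebrew`; with mode (2), remember that a matching config_compilers.xml entry in your personal .cime directory is also needed, as the cell notes.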