CESM2.1.3 Machine Definitions
Model Version: 2.1.3
HTML Created: 2020-02-13
Support Levels
Tested - Indicates that the machine has passed functional CESM regression testing but may not necessarily be appropriate for large coupled experiments due to machine restrictions, for example a limited number of nodes or limited disk space.
Scientific - Indicates that the machine has passed statistical validation and CESM regression testing and may be used for large coupled experiments.
Related Links
Reference: CIME "Defining the machine" documentation
Name | OS | Compilers | MAX_TASKS_PER_NODE | MAX_MPITASKS_PER_NODE | Support Level | Details |
---|---|---|---|---|---|---|
aleph | CNL | intel,gnu,cray | 40 | 40 | Unsupported | XC50 SkyLake, os is CNL, 40 pes/node, batch system is PBSPro |
athena | LINUX | intel,intel15 | 30 | 15 | Unsupported | CMCC IBM iDataPlex, os is Linux, 16 pes/node, batch system is LSF, MPI library is mpich |
bluewaters | CNL | intel,pgi,cray,gnu | 32 | 16 | Unsupported | NCSA Cray XE6, os is CNL, 32 pes/node, batch system is PBS |
centos7-linux | LINUX | gnu | 8 | 8 | Unsupported | Example port to a CentOS 7 Linux system with gcc, netcdf, pnetcdf and mpich, using environment modules as described at http://www.admin-magazine.com/HPC/Articles/Environment-Modules |
cheyenne | LINUX | intel,gnu,pgi | 36 | 36 | Scientific | NCAR SGI platform, os is Linux, 36 pes/node, batch system is PBS |
constance | LINUX | intel,pgi | 24 | 24 | Unsupported | PNL Haswell cluster, OS is Linux, batch system is SLURM |
cori-haswell | CNL | intel,gnu,cray | 64 | 32 | Unsupported | NERSC XC40 Haswell, os is CNL, 32 pes/node, batch system is Slurm |
cori-knl | CNL | intel,gnu,cray | 256 | 64 | Unsupported | NERSC XC* KNL, os is CNL, 68 pes/node, batch system is Slurm |
daint | CNL | pgi,cray,gnu | 12 | 12 | Unsupported | CSCS Cray XC50, os is SUSE SLES, 12 pes/node, batch system is SLURM |
eastwind | LINUX | pgi,intel | 12 | 12 | Unsupported | PNL IBM Xeon cluster, os is Linux (pgi), batch system is SLURM |
edison | CNL | intel,gnu,cray | 48 | 24 | Unsupported | NERSC XC30, os is CNL, 24 pes/node, batch system is SLURM |
euler2 | LINUX | intel,pgi | 24 | 24 | Unsupported | Euler II Linux Cluster ETH, 24 pes/node, InfiniBand, XeonE5_2680v3, batch system LSF |
euler3 | LINUX | intel,pgi | 4 | 4 | Unsupported | Euler III Linux Cluster ETH, 4 pes/node, Ethernet, XeonE3_1585Lv5, batch system LSF |
euler4 | LINUX | intel,pgi | 36 | 36 | Unsupported | Euler IV Linux Cluster ETH, 36 pes/node, InfiniBand, XeonGold_6150, batch system LSF |
gaea | CNL | pgi | 24 | 24 | Unsupported | NOAA XE6, os is CNL, 24 pes/node, batch system is PBS |
hobart | LINUX | intel,pgi,nag,gnu | 48 | 48 | Scientific | NCAR CGD Linux Cluster, 48 pes/node, batch system is PBS |
homebrew | Darwin | gnu | 8 | 4 | Unsupported | Customize these fields as appropriate for your system, particularly changing MAX_TASKS_PER_NODE and MAX_MPITASKS_PER_NODE to the number of cores on your machine. You may also want to change instances of '$ENV{HOME}/projects' to your desired directory organization. You can use this in either of two ways: (1) without making any changes, by adding `--machine homebrew` to create_newcase or create_test; or (2) by copying this into a config_machines.xml file in your personal .cime directory and then changing the machine name (MACH="homebrew") to your machine name and the NODENAME_REGEX to something matching your machine's hostname. With (2) you should not need the `--machine` argument, because the machine should be determined automatically. However, with (2) you will also need to copy the homebrew-specific settings in config_compilers.xml into a config_compilers.xml file in your personal .cime directory, again changing the machine name (MACH="homebrew") to your machine name. A minimal sketch of such a personal config_machines.xml follows this table. |
izumi | LINUX | intel,pgi,nag,gnu | 48 | 48 | Scientific | NCAR CGD Linux Cluster, 48 pes/node, batch system is PBS |
laramie | LINUX | intel,gnu | 36 | 36 | Unsupported | NCAR SGI test platform, os is Linux, 36 pes/node, batch system is PBS |
lawrencium-lr2 | LINUX | intel | 12 | 12 | Unsupported | Lawrencium LR2 cluster at LBL, OS is Linux (intel), batch system is SLURM |
lawrencium-lr3 | LINUX | intel | 16 | 16 | Unsupported | Lawrencium LR3 cluster at LBL, OS is Linux (intel), batch system is SLURM |
lonestar5 | LINUX | intel | 48 | 24 | Unsupported | Lonestar5 cluster at TACC, OS is Linux (intel), batch system is SLURM |
melvin | LINUX | gnu | 64 | 64 | Unsupported | Linux workstation for Jenkins testing |
mira | BGQ | ibm | 64 | 8 | Unsupported | ANL IBM BG/Q, os is BGQ, 16 pes/node, batch system is cobalt |
modex | LINUX | gnu | 12 | 12 | Unsupported | Medium-sized Linux cluster at BNL, batch system is Torque |
olympus | LINUX | pgi | 32 | 32 | Unsupported | PNL cluster, os is Linux (pgi), batch system is SLURM |
pleiades-bro | LINUX | intel | 28 | 28 | Unsupported | NASA/AMES Linux Cluster, Linux (ia64), 2.4 GHz Broadwell Intel Xeon E5-2680v4 processors, 28 pes/node (two 14-core processors) and 128 GB of memory/node, batch system is PBS |
pleiades-has | LINUX | intel | 24 | 24 | Unsupported | NASA/AMES Linux Cluster, Linux (ia64), 2.5 GHz Haswell Intel Xeon E5-2680v3 processors, 24 pes/node (two 12-core processors) and 128 GB of memory/node, batch system is PBS |
pleiades-ivy | LINUX | intel | 20 | 20 | Unsupported | NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.8 GHz Ivy Bridge processors, 20 cores/node and 3.2 GB of memory per core, batch system is PBS |
pleiades-san | LINUX | intel | 16 | 16 | Unsupported | NASA/AMES Linux Cluster, Linux (ia64), Altix ICE, 2.6 GHz Sandy Bridge processors, 16 cores/node and 32 GB of memory, batch system is PBS |
sandia-srn-sems | LINUX | gnu | 64 | 64 | Unsupported | Linux workstation at Sandia on SRN with SEMS TPL modules |
sandiatoss3 | LINUX | intel | 16 | 16 | Unsupported | SNL cluster |
stampede2-knl | LINUX | intel | 256 | 64 | Unsupported | Intel Xeon Phi 7250 ("Knights Landing"), batch system is SLURM |
stampede2-skx | LINUX | intel | 96 | 48 | Unsupported | Intel Xeon Platinum 8160 ("Skylake"), 48 cores on two sockets (24 cores/socket), batch system is SLURM |
theia | LINUX | intel | 24 | 24 | Unsupported | theia |
theta | CNL | intel,gnu,cray | 64 | 64 | Unsupported | ALCF Cray XC* KNL, os is CNL, 64 pes/node, batch system is cobalt |
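The homebrew entry above describes the general recipe for running CESM on a personal or otherwise unlisted machine: define the machine in a config_machines.xml placed in your personal .cime directory. The fragment below is a minimal, hypothetical sketch of such a file, modeled on that entry; the machine name, NODENAME_REGEX, directory paths, core counts, and mpirun settings are placeholder assumptions that must be adapted to your system. The homebrew block in the config_machines.xml shipped with CIME (see the reference link above) is the authoritative template.

```xml
<?xml version="1.0"?>
<!-- Hypothetical personal ~/.cime/config_machines.xml, sketched after the homebrew entry.
     All values (machine name, NODENAME_REGEX, paths, core counts) are placeholders. -->
<config_machines version="2.0">
  <machine MACH="mymachine">
    <DESC>Personal Linux workstation, gnu compilers, mpich</DESC>
    <NODENAME_REGEX>mymachine</NODENAME_REGEX>
    <OS>LINUX</OS>
    <COMPILERS>gnu</COMPILERS>
    <MPILIBS>mpich</MPILIBS>
    <CIME_OUTPUT_ROOT>$ENV{HOME}/projects/scratch</CIME_OUTPUT_ROOT>
    <DIN_LOC_ROOT>$ENV{HOME}/projects/cesm-inputdata</DIN_LOC_ROOT>
    <DIN_LOC_ROOT_CLMFORC>$ENV{HOME}/projects/ptclm-data</DIN_LOC_ROOT_CLMFORC>
    <DOUT_S_ROOT>$ENV{HOME}/projects/scratch/archive/$CASE</DOUT_S_ROOT>
    <GMAKE>make</GMAKE>
    <GMAKE_J>4</GMAKE_J>
    <BATCH_SYSTEM>none</BATCH_SYSTEM>
    <SUPPORTED_BY>your-email-here</SUPPORTED_BY>
    <!-- Set both to the number of cores on your machine, as the homebrew notes advise. -->
    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
    <mpirun mpilib="default">
      <executable>mpiexec</executable>
      <arguments>
        <arg name="ntasks">-np {{ total_tasks }}</arg>
      </arguments>
    </mpirun>
    <module_system type="none"/>
  </machine>
</config_machines>
```

With a NODENAME_REGEX that matches your hostname, create_newcase and create_test should detect the machine automatically; otherwise pass `--machine mymachine` explicitly, exactly as the homebrew entry describes for the unmodified case. A matching compiler block in a personal config_compilers.xml, keyed to the same machine name, is also required.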