
NU-WRF test cases.

Version 6
Created on: Apr 12, 2018 2:22 PM by Carlos Cruz - Last Modified:  Apr 12, 2018 4:43 PM by Carlos Cruz

A recommended way to get started with NU-WRF is to go through the tutorial and follow its step-by-step instructions. After the tutorial, it is useful to look into the NU-WRF test cases. The latest NU-WRF modeling system ships with 40 test cases that can serve as starting points for other, more specialized case studies. The easiest way to tap into this small database of use cases is the regression testing scripts (*).


The testing scripts are found in the scripts/python/regression subdirectory. They are a set of Python scripts and configuration files designed to run tests on the NU-WRF code base. The run environment and test descriptions are specified in a text configuration file, which is passed as an argument to the main driver script, "reg", as follows:


$ reg my_test &


The script will output information to the screen describing, among other things,  the location of the results.


The easiest way to create your own test is to start with the sample configuration file "sample.cfg". The configuration file consists of key-value pairs spread over three sections. The first section is the USERCONFIG section. Values that need to be modified are denoted by a <CHANGE_ME> string:
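Schematically, a configuration file has this shape. The section names come from this page; the comments summarizing each section are paraphrases, and the elided keys are listed in sample.cfg:

```ini
[USERCONFIG]
# run environment: repository, scratch space, compilation type, batch options
# ...

[wrf_3iceg_2014rad]
# the test itself: components to run, CPU counts, expected output
# ...

[COMPCONFIG]
# computational environment: compiler vendors and module lists
# ...
```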




# The latest code is now stored in a git repository, so one can use:



# Enter your personal URL, i.e. path of your git clone:



# Branch name (default is master)



# Filesystem where we are doing all the work. If it does not exist, it will be created.



# Where we keep the regression scripts. Default <repository>/scripts/regression



# Compilation type (release, debug (-O0 -g), traps)



# Where to mail tests report



# The following options are not commonly changed:

# Use SLURM batch system on DISCOVER (NASA only)

# If set to 'no', the script (and the commands therein) will run interactively.

# This option is preferred when running in debug mode (dry runs).



# sponsor ID required by SLURM



# If we are using "modules" to load compilers then set to "yes"

# NOTE: If modules=yes then specify/use modulelist in COMPCONFIG section

# If set to 'no', scripts will use compilers available in the system.



# In case we update to CMake; for now, makeOld refers to GNU make.



# Test report message (One sentence, no quotes)

message=Regression testing of NU-WRF code base



# Clean the regression testing scratch space (scratch_dir above)




# Data path for the test cases. Default <project_dir>/regression_testing/data

# where <project_dir> = /discover/nobackup/projects/nu-wrf

# Do not change unless you have the data needed to run a particular test case.



# Location of NU-WRF output baseline files.

# Do not change unless you have baseline files in that location.

# You can create your own baseline directory to test your own code.



# Update baseline_dir with new model answers (change to yes when your code is ready and if you have permission to update)



# Location of third-party libraries used by NU-WRF

# Variations: intel-sgimpt-bjerknes-p5, intel-sgimpt-charney-p1, intel-sgimpt (used for Charney)

# If using legacy code (pre-bjerknes-p5) set:

# nuwrflib_dir=/discover/nobackup/projects/nu-wrf/lib/SLES11.3/nuwrflib-8r2/intel



# LIS project directory



# To use a NU-WRF existing build, define model_dir below.

# If nothing is specified, new builds will be created.



# If executables exist in a separate installation, set the following variable:



# If we want to recreate (setup) a testcase without running the components:

# 1) You have a NU-WRF build (i.e. model_dir/exe_dir are specified).

# 2) You want to 'manually' run individual NU-WRF components

# Then set setup_runs=yes
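For orientation, a hypothetical USERCONFIG might look like the sketch below. Only key names that appear in the comments above are used; every value is illustrative or a <CHANGE_ME> placeholder, and the full key set is in sample.cfg:

```ini
[USERCONFIG]
# illustrative values only; replace <CHANGE_ME> with your own settings
scratch_dir=<CHANGE_ME>
message=Regression testing of NU-WRF code base
modules=yes
setup_runs=no
# optional: reuse an existing build instead of creating a new one
#model_dir=<CHANGE_ME>
#exe_dir=<CHANGE_ME>
```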



The second section describes your test.






expected_output=wrfout_d01_2009-04-10_12:00:00,wrfout_d01_2009-04-10_15:00:00,wrfout_d01_2009-04-10_18:00:00,wrfout_d01_2009-04-10_21:00:00,wrfout_d01_2009-04-11_00:00:00,wrfout_d01_2009-04-11_03:00:00,wrfout_d01_2009-04-11_06:00:00,wrfout_d01_2009-04-11_09:00:00,wrfout_d01_2009-04-11_12:00:00,wrfout_d02_2009-04-10_12:00:00,wrfout_d02_2009-04-10_15:00:00,wrfout_d02_2009-04-10_18:00:00,wrfout_d02_2009-04-10_21:00:00,wrfout_d02_2009-04-11_00:00:00,wrfout_d02_2009-04-11_03:00:00,wrfout_d02_2009-04-11_06:00:00,wrfout_d02_2009-04-11_09:00:00,wrfout_d02_2009-04-11_12:00:00



Explanation of entries:



The name of the test is enclosed in brackets: [wrf_3iceg_2014rad]

     The name, wrf_3iceg_2014rad, represents the build type + the run type

     The first substring (before the first _) refers to the build type, i.e. wrf

     There are only 4 build types in NU-WRF: wrf, wrflis, chem, kpp

     The rest of the string, 3iceg_2014rad, is the run type (3iceg and 2014 radiation scheme)



Each test will be executed using the computational environment specified in compilers. Currently there are only two working environments: intel-sgimpt and gnu-mvapich2. The computational environments are specified separately in the comp.cfg file (also in this directory). To change the computational environment, copy comp.cfg into a user-defined cfg file and edit the settings there.



Each test consists of running a set of specified components: geogrid,ungrib,metgrid,real,wrf



The list of numbers following npes= gives the number of CPUs used to run the corresponding component; for each component there must be a matching npes entry.



Each run contains an expected_output field. In this case, after the wrf run finishes, the reg script will attempt to compare the wrf output with the baseline files stored in baseline_dir. If expected_output is missing, the comparison is skipped. If the run output differs from the expected_output, the final report will flag an error.
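The comparison step can be pictured with a small Python sketch. This is a hypothetical helper, not the actual reg internals; in particular, the real script presumably uses a NetCDF-aware comparison for wrfout files rather than the byte-for-byte one shown here:

```python
# Hypothetical sketch of the expected_output check described above: after a
# run, each expected file must exist and match its copy in baseline_dir.
# Names are illustrative, not the actual reg internals.
import filecmp
import os


def verify_run(run_dir, baseline_dir, expected_output):
    """Compare each expected output file against the baseline; return failures."""
    failures = []
    for name in expected_output.split(","):
        run_file = os.path.join(run_dir, name)
        base_file = os.path.join(baseline_dir, name)
        if not os.path.isfile(run_file):
            failures.append(name + " (missing)")
        elif not filecmp.cmp(run_file, base_file, shallow=False):
            failures.append(name + " (differs from baseline)")
    return failures
```

An empty list means the run reproduced the baseline; any entries would be flagged in the final report.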



By default, each test is compiled and run as specified. However, the "verification" method can be changed to "compile_only", in which case the npes and component entries are not needed.
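Putting the entries above together, a test section might look like the following sketch. The test name, compilers value, and component list come from this page; the exact key names and the npes values are illustrative:

```ini
[wrf_3iceg_2014rad]
compilers=intel-sgimpt
components=geogrid,ungrib,metgrid,real,wrf
# one CPU count per component, in the same order (values illustrative)
npes=16,1,16,112,112
expected_output=wrfout_d01_2009-04-10_12:00:00,...
```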



All the available NU-WRF test cases are specified in the "master.cfg" file.



The third section, [COMPCONFIG], describes the computational environment used by the tests. This configuration lives in a separate file, comp.cfg, because its settings are not expected to change within a particular NU-WRF release:




# What compiler vendors we support. Current choices are: intel,gnu


# Specify version for each compiler: One-to-one with 'compilers' list above.

# This information is *only* used in the final report.



# Specify names of module lists here. Note that the names in this list

# must correspond to the ones referenced below.


# What each modulelist entry contains. Given names correspond to actual

# module names on DISCOVER (or system being used).

intel-sgimpt=other/comp/gcc-6.3,comp/intel-,lib/mkl-,mpi/sgi-mpt-2.16,other/SSSO_Ana-PyD/SApd_4.2.0_py3.5

gnu-openmpi=other/comp/gcc-4.9.2-sp3,other/mpi/openmpi/1.8.4-gcc-4.9.2-sp3,other/SSSO_Ana-PyD/SApd_2.4.0_py2.7

#gnu-openmpi=other/comp/gcc-5.2-sp3,other/mpi/openmpi/1.8.7-gcc-5.2-sp3,other/SSSO_Ana-PyD/SApd_2.4.0_py2.7

gnu-sgimpt=other/comp/gcc-5.3-sp3,mpi/sgi-mpt-2.12,other/SSSO_Ana-PyD/SApd_2.4.0_py2.7

gnu-mvapich2=other/comp/gcc-4.9.2-sp3,other/mpi/mvapich2-2.1/gcc-4.9.2-sp3,other/SSSO_Ana-PyD/SApd_2.4.0_py2.7


However, these settings can be overridden by including all of the [COMPCONFIG] contents in the user-defined configuration file. For example, to try a newer compiler version you would need to:

1) Add a new module list name in "modulelist"

2) Specify the system-specific module names corresponding to (1)
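Those two steps might look like the sketch below; the new list name gnu-mytest and its module names are purely illustrative and must match what is actually installed on your system:

```ini
[COMPCONFIG]
compilers=intel,gnu
# 1) add the new module list name (gnu-mytest is hypothetical)
modulelist=intel-sgimpt,gnu-mytest
# 2) define the modules it loads (names must exist on your system)
gnu-mytest=other/comp/gcc-7.3,other/mpi/openmpi/3.0.0-gcc-7.3
```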







Of course, the NU-WRF third-party libraries would also have to be built with the same computational environment, so this approach requires significant work and is not recommended.



(*) The NU-WRF regression testing scripts are routinely used to perform software verification of the modeling system. For a system like NU-WRF, attaining high test coverage would require a very large number of tests; a program with high test coverage has a lower chance of containing undetected bugs than one with low coverage. Our 40 tests constitute a limited but representative, and practical, sample.




