
GEOS-5 DAS quick start guide.

VERSION 7
Created on: Apr 7, 2008 10:04 AM by Carlos Cruz - Last Modified:  Apr 13, 2010 1:08 PM by Carlos Cruz

NOTE: These instructions have been (temporarily) superseded.


This quick start guide describes a particular setup for running the GEOS-5 DAS on the NCCS DISCOVER system.


First define the location of BASEDIR for your environment:


setenv BASEDIR /usr/local/other/baselibs/ESMF220rp2_NetCDF362b6_9.1.052




Also define GEOSUTIL, the location of the GEOS utilities within the source tree:


setenv GEOSUTIL $NOBACKUP/GEOSadas/src/GMAO_Shared/GEOS_Util


Note: $NOBACKUP is a global environment variable defined on NCCS systems.
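

Taken together, the environment setup above can be placed in your ~/.cshrc so it applies to every session (the BASEDIR path is the one given earlier in this guide):

```shell
# Environment for building the GEOS-5 DAS on DISCOVER (csh/tcsh syntax)
setenv BASEDIR /usr/local/other/baselibs/ESMF220rp2_NetCDF362b6_9.1.052
setenv GEOSUTIL $NOBACKUP/GEOSadas/src/GMAO_Shared/GEOS_Util
```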


Next, check out the GEOSadas source code:


cvs -d co -r GEOSdas-2_1_4 GEOSadas


Note the use of the GEOSdas-2_1_4 tag (for more information see the GEOS-5 DAS revision history).


cd GEOSadas/src/ 
gmake install >& install.log &        (csh)
gmake install > install.log 2>&1 &    (bash)
(or you can run parallel_build.csh)


Note: a successful installation will create, among others, the GEOSgcm.x and gsi.x executables in $NOBACKUP/GEOSadas/Linux/bin. Now go to $NOBACKUP/GEOSadas/Linux/bin and execute the fvsetup script.
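

A quick way to confirm the build succeeded is to check for the two executables directly (a minimal sketch in Bourne-shell syntax; the paths are the ones given above):

```shell
# Check that the key DAS executables were produced by the build.
# $NOBACKUP is the NCCS variable described earlier; on other systems,
# substitute the directory where you checked out GEOSadas.
bindir="$NOBACKUP/GEOSadas/Linux/bin"
for exe in GEOSgcm.x gsi.x; do
  if [ -x "$bindir/$exe" ]; then
    echo "$exe: found"
  else
    echo "$exe: missing (check install.log for build errors)"
  fi
done
```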


fvsetup is the DAS experiment setup utility: an interactive Perl script that walks you through an experiment configuration. Unfortunately, it is quite unforgiving; if you make a mistake (such as a typo or an unintended keystroke) you cannot "go back" and have only two choices: restart the setup or edit the resulting configuration files afterwards.

Since fvsetup requires a moderate amount of knowledge about the DAS settings, the best a novice can do is use the default settings, which may or may not work.
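

Concretely, the setup step amounts to the following (csh syntax; the path is the install location given above):

```shell
cd $NOBACKUP/GEOSadas/Linux/bin
./fvsetup
```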


Here we describe a particular set of options that allows us to run a DAS MERRA simulation at 1/2-degree model resolution. Generally, the biggest difficulty for a novice is selecting an appropriate (working) set of initial conditions. During this particular fvsetup session only the following settings need to be changed (use the defaults for all others):


  Resolution? [b72] d72

  FVICS? /archive/merra/dao_ops/production/GEOSdas-2_1_4/d5_merra_jan98

  Starting year-month-day? [20000929] 19980205

  Ending year-month-day? [20001006] 19980206
  Ending hour-min-sec? [090000] 210000
  Length (in days) of FORECAST run segments? [5] 1

  Frequency for PROGNOSTIC fields? [030000] 120000
  Frequency for surface (2D) DIAGNOSTIC fields? [030000] 120000
  Frequency for upper air (3D) DIAGNOSTIC fields? [030000] 120000

  Which template? HISTORY_MERRA_CFIO.rc.tmpl

  Would you like 2D diagnostics? [y] n
  Would you like 3D diagnostics? [y] n


These are the settings for d72 (for information on resolution codes see Description of resolution codes).


Note that in your FVHOME directory (which should be $NOBACKUP/u000_d72) there will be a directory structure as follows:


drwxr-xr-x  ana/
drwxr-xr-x  daotovs/
drwxr-xr-x  diag/
drwxr-xr-x  etc/
drwxr-xr-x  fcst/
drwxr-xr-x  fvInput/
drwxr-xr-x  obs/
drwxr-xr-x  prog/
drwxr-xr-x  recycle/
drwxr-xr-x  rs/
drwxr-xr-x  run/


The above setup will run a GEOS-5 DAS simulation for two days and will output PROGNOSTIC and DIAGNOSTIC data every 12 hours.
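

The guide does not show the submission command itself; on DISCOVER, the run script that fvsetup generates in the FVHOME run/ directory is submitted to PBS with qsub. The script name below is an assumption; list your run/ directory to find the actual name:

```shell
# Submit the DAS job generated by fvsetup (the script name g5das.j is
# hypothetical; check $FVHOME/run for your experiment's job script).
cd $NOBACKUP/u000_d72/run
qsub g5das.j
```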


The job will launch 64 processes with 4 processes per node. To see the status of your job, issue the following command:


u000_d72/run> qstat -a


When the simulation is done, the results (restarts, analysis files, etc.) will be archived to $ARCHIVE/u000_d72, where you should see something like this:


drwxr-xr-x  ana/
drwxr-xr-x  diag/
drwxr-xr-x  etc/
drwxr-xr-x  fcst/
drwxr-xr-x  obs/
drwxr-xr-x  rs/
drwxr-xr-x  run/


If unsuccessful, you will see a "morgue" directory in your FVHOME containing an image of the work directory (FVWORK) in which the simulation ran. Note that "ls -la" under FVHOME reveals two hidden text files, .FVWORK and .FVROOT, that contain the values of the FVWORK and FVROOT environment variables.
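

For a failed run, the hidden files mentioned above are a quick way to find where the work directory lived before its image was copied to the morgue (the FVHOME path is the example experiment from this guide):

```shell
# Print the recorded work and root directories for the experiment,
# then inspect the morgue left behind by the failed run.
cd $NOBACKUP/u000_d72
cat .FVWORK
cat .FVROOT
ls morgue/
```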



Monitoring currently running jobs (stderr/stdout) on DISCOVER.


/discover/pbs_spool is a 200 GB GPFS filesystem that serves as a globally visible spool directory. The local spool directory on all compute nodes is now a symlink that points to this global spool directory. You should be able to monitor job error/output by going to this directory and finding the appropriate files by their job IDs. As with the SGIs, users should not edit or remove any files in this directory, or unpredictable things may happen. The intermediate output files have names such as <job-number>.<node-of-submission>.OU, for example:


userid@discover01:/discover/pbs_spool> ls
1008.borgmg.OU  1224.borgmg.OU  1249.borgmg.OU  
1390.borgmg.OU  1628.borgmg.OU
1036.borgmg.OU  1225.borgmg.OU  1256.borgmg.OU  
1396.borgmg.OU  1705.borgmg.OU
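

Given the naming convention above, the spool file for a particular job can be constructed from the job number and the submission host (the values below are taken from the example listing):

```shell
# Build the expected PBS spool-file name: <job-number>.<node-of-submission>.OU
jobnum=1008
subhost=borgmg
spoolfile="/discover/pbs_spool/${jobnum}.${subhost}.OU"
echo "$spoolfile"
# On DISCOVER you could then follow the job's output with:
#   tail -f "$spoolfile"
```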



Please note: this filesystem is not set up for I/O performance or for handling large stderr/stdout files. It is expected that small amounts of text-only output will be written here (and moved back to submission directories at the conclusion of a job). If users have large text I/O requirements, they should write directly to a file rather than using stdout.


Any non-PBS files that show up in this directory are subject to deletion at any time and without warning. This filesystem is for PBS spool use only.


If PBS cannot place a stderr/stdout file where it thinks it should go, then it will place the file in
