
Cubed Sphere Migration


Code scoping

Posted by dgueyffi Aug 4, 2008

Module summaries, with links to the detailed comments on the source code:

Radiation: in RAD_DRV.f

  • Some important quantities, like COSZ and COSZA, are functions defined everywhere on the sphere and sampled at cell centers (I,J). All other important quantities, like GTEMP, depend only on local (I,J) column information.

 

  • No special treatment for cube vertices because quantities are
    sampled at cell centers (we thus avoid the singularities at cube
    corners)

 

  • Most changes are related to adding a dependence on (I,J) instead of only J. We will rely on array definitions made in GEOM_B.f (like SINIP(I,J) or DXYP(I,J)). Some changes are necessary because Latitude = constant no longer corresponds to J = constant.

 

  • It would be most useful to use an object for the grid. Bill, Tom, are f95/f2003 objects compatible with MPI and domain decomposition?

 

  • Regridding will be used to check that quantities are globally conserved on the cubed sphere. In the present case we must check that (1/A) \int COSZ dA = 1/4, where A is the area of the sphere.

 

Link to detailed comments: Scoping of rad_drv.f

Clouds: in CLOUDS_DRV.f

  • Changes are mostly in diagnostics. We will make heavy use of the new zigzag zonal means

  • We will also need to 'gather' regional mean diagnostics

  • We will need new 2D domain decomposition routines: GET, HALO_UPDATES, BURN_RANDOM, HALO_UPDATES_COLUMN

  • 1D domain decomp -> 2D domain decomp, so the bounds of do loops like DO I = 1,IM become DO I = I_0,I_1

  • Some arrays do not have I,J in the first positions. Domain decomposition functions will need to accommodate this

  • Use of the 4 neighboring cells for the wind: KMAX(J) = 4 everywhere

 

CLOUDS.f is unchanged as it is 100% column physics.

In CLOUDS_COM.f

  • 1D -> 2D domain decomposition; change allocations accordingly

  • We need implementations of PACK_COLUMN and UNPACK_COLUMN

 

Link to detailed comments: Scoping of cloud routines

Sea Ice: in SEAICE_DRV.f

 

  • Need to transition from 1D domain decomposition to 2D domain decomposition

  • Several global diagnostics; we may want to use an MPI reduction

  • Zonal mean diagnostics: calls to the zigzag functions

 

Link to detailed comments: Scoping sea ice routines


Scoping sea ice routines

Posted by dgueyffi Aug 4, 2008

Seaice_drv.f:

 

 

 

  • Need to transition from 1D domain decomposition to 2D domain decomposition

  • Several global diagnostics; we may want to use an MPI reduction (see the sketch below)

  • Zonal mean diagnostics: calls to the zigzag functions
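On the MPI reduction, here is a minimal free-form Fortran sketch of what a GLOBALSUM-style wrapper could do for these global diagnostics. The tile bounds, the diagnostic array and the cell areas are made-up placeholders, not ModelE variables; only the MPI calls themselves are real.

! Sketch only: bounds and arrays are placeholders for what the real
! 2D domain decomposition would provide.
program globalsum_sketch
  use mpi
  implicit none
  integer, parameter :: i_0 = 1, i_1 = 8, j_0 = 1, j_1 = 8
  real(8) :: diag(i_0:i_1, j_0:j_1), dxyp(i_0:i_1, j_0:j_1)
  real(8) :: local_sum, global_sum
  integer :: i, j, ierr, rank

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  diag = 1.0d0            ! stand-in for a per-cell sea-ice diagnostic
  dxyp = 1.0d0            ! stand-in for cell areas

  local_sum = 0.0d0
  do j = j_0, j_1
    do i = i_0, i_1
      local_sum = local_sum + diag(i, j)*dxyp(i, j)
    end do
  end do

  ! one reduction gives every task the area-weighted global sum
  call MPI_Allreduce(local_sum, global_sum, 1, MPI_DOUBLE_PRECISION, &
                     MPI_SUM, MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'global area-weighted sum =', global_sum
  call MPI_Finalize(ierr)
end program globalsum_sketch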

 


 

The attached tgz file contains the IDL code to generate an image of the cubed sphere grid. Untar it where you would like to run the IDL code. Running it is simple: once IDL is started, enter:

 

 

grid_gen, 32

 

 

will produce a PNG image of the c32 grid.

grid_gen, 32, /SHIFT_WEST

 

 

will produce the same PNG image, but with the corner points rotated to avoid placing a corner over steep terrain in Japan; this sample image is attached.

 

 

You can generate an image for any c#### grid, where #### is any even number.  The vertices are computed within the IDL code.

 

 


scoping of cloud routines

Posted by dgueyffi Jul 29, 2008

See the attached file (Acrobat Reader is needed to read the comments and sticky notes).

 

In CLOUDS_DRV.f:

  • Changes are mostly in diagnostics. We will make heavy use of the new zigzag zonal means

  • We will also need to 'gather' regional mean diagnostics

  • We will need new 2D domain decomposition routines: GET, HALO_UPDATES, BURN_RANDOM, HALO_UPDATES_COLUMN

  • 1D domain decomp -> 2D domain decomp, so the bounds of do loops like DO I = 1,IM become DO I = I_0,I_1 (see the sketch after this list)

  • Some arrays do not have I,J in the first positions. Domain decomposition functions will need to accommodate this

  • Use of the 4 neighboring cells for the wind: KMAX(J) = 4 everywhere
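As a concrete illustration of the loop-bound change, here is a small free-form Fortran sketch. The tile bounds are hard-coded stand-ins for what the 2D GET would return, and the array is made up.

program loop_bounds_sketch
  implicit none
  integer, parameter :: im = 16, jm = 16, lm = 4
  integer :: i, j, i_0, i_1, j_0, j_1
  real(8) :: t(im, jm, lm)

  t = 0.0d0
  ! 1D decomposition: only J was split, so the inner loop was DO I = 1, IM.
  ! 2D decomposition: each task also owns an I range; here it is
  ! hard-coded as a stand-in for what GET() would return.
  i_0 = 1;  i_1 = im/2
  j_0 = 1;  j_1 = jm/2
  do j = j_0, j_1
    do i = i_0, i_1
      t(i, j, :) = t(i, j, :) + 1.0d0   ! purely local column update
    end do
  end do
  print *, 'cells updated on this tile:', count(t(:, :, 1) > 0.0d0)
end program loop_bounds_sketch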

 

CLOUDS.f is unchanged as it is 100% column physics

 

In CLOUDS_COM.f:

  • 1D -> 2D domain decomposition; change allocations accordingly

  • We need implementations of PACK_COLUMN and UNPACK_COLUMN


Parallel functions: where each is defined, where it is called from, and what's needed.

HALO_UPDATE(), defined in DOMAIN_DECOMP.f, called from CLOUDS_DRV, ...

  • needs to understand the new geometry

  • handles 2D domains instead of 1D

GET(), defined in DOMAIN_DECOMP.f, called from RAD_DRV, CLOUDS_DRV, ...

  • same needs as HALO_UPDATE()

GLOBALSUM(), defined in DOMAIN_DECOMP.f, called from CLOUDS_DRV, ...

  • same needs as HALO_UPDATE()

PACK_COLUMN(), defined in DOMAIN_DECOMP.f, called from CLOUDS_COM

  • wraps gather routines implemented by Bill

UNPACK_COLUMN(), defined in DOMAIN_DECOMP.f, called from CLOUDS_COM

  • wraps scatter routines implemented by Bill

BURN_RANDOM(), defined in SYSTEM.f, called from RAD_DRV

  • should be passed, as a parameter, a global index that is a function of (I, J, face_index) (see the sketch below)
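On that last point, here is a minimal free-form Fortran sketch of the kind of global cell index BURN_RANDOM could be given, so that the random-number stream stays independent of the domain decomposition. The face-major layout and the names are assumptions, not a decided convention.

program global_index_sketch
  implicit none
  integer, parameter :: im = 32      ! cells along a cube edge (c32)
  integer :: i, j, face

  i = 5;  j = 7;  face = 3
  print *, 'global index of (', i, ',', j, ') on face', face, ':', &
           global_index(i, j, face, im)

contains

  ! faces numbered 1..6, each holding im*im cells; assumed layout only
  integer function global_index(i, j, face, im) result(g)
    integer, intent(in) :: i, j, face, im
    g = (face - 1)*im*im + (j - 1)*im + i
  end function global_index
end program global_index_sketch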


Domain decomposition

  • Maharaj Bhat at SIVO has begun working on domain decomposition. We proposed that he begin working on the following:

  • Bill has gather routines which should be generalized to arrays with a different ordering of indices (see the sketch after this list). These routines will be used for diagnostics, zonal means and input/output.

  • We will need implementations for GET, HALO_UPDATE, GLOBALSUM, BURN_RANDOM and other routines in domain_decomp.f that wrap MPP_* routines.

  • MAPL has gather & scatter capabilities, but we believe coding the input/output routines would be costly.

  • Max has implemented a version of mpp_update_domain for different array shapes
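A small free-form Fortran sketch of the index-reordering step that would let arrays stored as (LM, I, J) reuse gather routines written for (I, J)-leading arrays. The array names and sizes are illustrative only.

program reorder_sketch
  implicit none
  integer, parameter :: lm = 4, im = 8, jm = 8
  integer :: i, j, l
  real(8) :: a_lij(lm, im, jm)   ! layer index first, as in some model arrays
  real(8) :: a_ijl(im, jm, lm)   ! (I,J)-leading copy expected by the gathers

  call random_number(a_lij)
  do l = 1, lm
    do j = 1, jm
      do i = 1, im
        a_ijl(i, j, l) = a_lij(l, i, j)
      end do
    end do
  end do
  print *, 'max difference for layer 1:', &
           maxval(abs(a_ijl(:, :, 1) - a_lij(1, :, :)))
end program reorder_sketch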

 

Regridding routines

  • We have found that the regridding routines possibly contain an error in the calculation of the area of the exchange grid.

  • We are replacing the approximate formulas for the area of a polygon with an exact formula (see the sketch after this list).

  • We found a small difference, below 1.e-5, between the area of a 1-degree latitude band on the exchange grid and the exact formula.

  • But we found that the areas of the 4 cells around the North Pole are 15% larger than the areas of the cells around the center of an equatorial cell.

  • We will write an email to Zhi Liang to initiate discussions about this issue, as we may have misunderstood how to make use of the remapping file.
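For reference, here is one exact formula for the area of a polygon on the unit sphere, summing spherical-triangle solid angles fanned from the first vertex (the Van Oosterom & Strackee form). It assumes great-circle edges, a convex polygon and vertices given as unit vectors; multiply by R**2 for a sphere of radius R. This is a free-form Fortran sketch of the idea, not necessarily the formula actually used in the regridding code.

program polygon_area_sketch
  implicit none
  real(8), parameter :: pi = 3.141592653589793d0
  real(8) :: v(3, 3)

  ! test polygon: the octant spherical triangle, whose area is 4*pi/8
  v(:, 1) = [1d0, 0d0, 0d0]
  v(:, 2) = [0d0, 1d0, 0d0]
  v(:, 3) = [0d0, 0d0, 1d0]
  print *, 'octant area / (4*pi) =', poly_area(v, 3)/(4d0*pi)   ! expect 0.125

contains

  function poly_area(p, n) result(area)
    integer, intent(in) :: n
    real(8), intent(in) :: p(3, n)
    real(8) :: area, a(3), b(3), c(3), cr(3), num, den
    integer :: k
    area = 0d0
    do k = 2, n - 1   ! fan the polygon into triangles (1, k, k+1)
      a = p(:, 1);  b = p(:, k);  c = p(:, k + 1)
      cr = [b(2)*c(3) - b(3)*c(2), b(3)*c(1) - b(1)*c(3), b(1)*c(2) - b(2)*c(1)]
      num = abs(dot_product(a, cr))
      den = 1d0 + dot_product(a, b) + dot_product(b, c) + dot_product(c, a)
      area = area + 2d0*atan2(num, den)   ! solid angle of triangle (a, b, c)
    end do
  end function poly_area
end program polygon_area_sketch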

 

 

Quadratic Upstream scheme

  • It is ready to go.

  • Some model-E users have expressed the need to be able to switch between the two advection schemes.

 

 

 

Baselibs on MacOSX

  • Bill will ask the person in charge of porting Baselibs about its status

  • Denis will discuss the compilation of ESMF on MacOSX with Igor

 

 

 

Graphics routines

  • Denis asked whether some code already exists to plot the cubed sphere grid. Bill has written some IDL code that he offered to share.

 

 

 

Scoping

  • see previous MG blog post

 

 


 

The only gather functionality that I can find in MPP is a routine mpp_global_field which creates a copy of a global array on every processor.  But as far as I can tell, the version of MPP we have defines "global" as only one face of the cube.  Maybe future versions will allow all 6 cube faces to be collected into a single array?  And there is no scatter routine.

 

Presumably this reflects the fact that most gathers and scatters are only for doing I/O from the root processor, so one is supposed to use the I/O functions of MPP instead.  In the cubed sphere context that would mean that we write one file per processor, because "global" arrays in our version of MPP can only cover one face of the cube (I think).  This would be fine, but a more troublesome issue is that many model E checkpoint arrays are not "fields" in the MPP sense that all their dimensions correspond to either space or time.  Some work would be required to fit model E arrays into the mold.

 

 

Bill/Tom, would MAPL's I/O and gather routines be a better fit for model E?  Or is model E not sufficiently ESMF-like?  If not, perhaps the parallel I/O capabilities of netcdf version 4 could be used to avoid gathers/scatters.  I have tested the latest release on discover and it seems to work fine.  As for performance, Model E will never be run on thousands of processors anyway.

 

 

The other place in model E where gathers are used is to scale and print diagnostics in serial mode. The mpp_global_field routine could be used if it could collect all 6 faces into one array.  There was probably no push to get this feature because other codes calculate all their diagnostics offline.

 

 

 


Notes from July 17 meeting

Posted by dgueyffi Jul 17, 2008
  • Tracers: Max has an implementation of QUS. For vertical mass fluxes, one needs to take into account that exported mass fluxes are on a Lagrangian surface

 

  • CVS: GSFC's CVS contains aquaplanet tests. Bill has to commit new things before we import into GISS CVS. Some dependence on the MAPL framework.

  • Domain decomposition/MPI:

* Max has a parallel implementation of QUS; continue implementing gathers/scatters

* Need to accommodate arrays with I, J not in the first positions

* In domain_decomposition.f, implement wrappers for the mpp_* functions

* Tom will staff someone at GSFC to do the MPI work described above

  • Conservative regridding routines (Max & Denis).

* Regridding routines have been tested (mostly) successfully. We have identified an issue with the calculation of the area of the exchange grid, which we are trying to fix.

* We have an implementation of the calculation of zonal means using zigzags of 2 to 3 cells.

  • Scoping (Denis): continue with scoping for geometrical arrays and look for potential issues due to geometry change.

 


QUS test cases

Posted by kelley Jul 16, 2008

 

To test my initial coding of the tracer QUS for the cubed sphere, I implanted it into the FVcubed core that runs the test cases for the NCAR colloquium.  So far, everything looks like it has been implemented correctly, and results are identical on 6, 24, 54, and 96 processors.   Large-scale features are indistinguishable from PPM results, while maxima and small-scale features show somewhat less diffusion as one would expect.

 

 

Attached are two figures showing tracer distributions at 3, 6, 9, and 12 days for test cases 1-5 and 5-0.   The numbers indicate the maximum at each timestep.

 

 

Bill, which test case would be the best at exposing errors related to the treatment at the corners of the cube?

 

 

There is a small direction-splitting bias in the QUS that shows up in Q3 in test case 1 for example. More directional symmetry could be achieved (with increased XY computational work), but it is worth noting that the lat-lon version has the same splitting issues.

 

 


 

I have merged the mass flux updates into the cubed-sphere aqua-planet src in the GEOS ESMA repository.

 

 

You can check this code out using:

 

 

cvs -d sourcemotel.gsfc.nasa.gov:/cvsroot/esma co -r GEOSagcm-cubed-aqua_v2p1 GEOSgcm_m2

Mass Flux update

Posted by wputman Jul 15, 2008

 

I have committed the mass flux exports for the FVcubed core.  They are the MFX, MFY and MFZ exports, as in the lat-lon core.  The MFX, MFY pair are in the native grid orientation (cubed-sphere) on a C-grid at layer centers.  The MFZ fluxes are on an A-grid at layer edges.  All 3 have been vertically remapped from the floating Lagrangian vertical coordinate to the Eulerian reference coordinate.

 

 

This was committed to the head of the DycoreTestSuite repository. 

 

 

cvs -d sourcemotel.gsfc.nasa.gov:/cvsroot/astg co DycoreTestSuite

 

 

I am merging this with the Aqua-Planet code in the full ESMA repository where GEOS sits.  Once this merge is complete all checkouts of the cubed-sphere dycore should come from the ESMA repository.  I will supply a Tag when it is ready.

 

 

-Bill

Other topics for our meeting

Posted by dgueyffi Jul 11, 2008
  • Baselibs: I've installed the FVcube library in a standalone way on my Mac (i.e. my .a static library compiles without ESMF), but I will need baselibs compiled on a Mac. Bill, I've seen your posts about this topic on the GEOS MG blog. What is the status of that?

  • Domain decomposition: Max has been looking at domain decomposition routines like mpp_update_domains, which does halo updates. He has adapted the code to get a column version. We'd like to identify what other routines we need to handle domain decomposition. What's going to be in ModelE's domain_decomposition? Will we have some functions in domain_decomposition.f which will be wrappers (or adapters) of the FVcube functions? Will users see the same functions as before?

  • Regridding: we are looking at ways to do zonal means along a lat = constant line. Before, this was straightforward. Now, with the cubed sphere, a lat = constant line corresponds to a zigzag of cells (see the sketch after this list). Is the existing regridding library capable of handling this? We have some ideas about how to implement this, but if it already exists, why reinvent the wheel? We need this for scalar and vector fluxes.

  • Non-orthogonal grid: I'm still not 100% sure about the FV treatment of momentum transport on a non-orthogonal grid. I understand that for scalars there is a sin \alpha term, but couldn't we write out explicitly what the metric term looks like for momentum? Bill, is this in your thesis or somewhere else?

  • Scoping: we are clear for the radiation, see my posts on MG. Next steps: clouds, precip, meltSI, rivers.
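Here is a simple free-form Fortran stand-in for the zonal-mean idea: bin cubed-sphere cells by the latitude of their centers and form area-weighted band averages. The arrays are fake data, and the real implementation would follow the lat = constant line through the 2-3 cell zigzag rather than just binning cell centers.

program zonal_mean_sketch
  implicit none
  integer, parameter :: im = 8, jm = 8, nfaces = 6, nbands = 12
  real(8), parameter :: pi = 3.141592653589793d0
  real(8) :: lat2d(im, jm, nfaces), dxyp(im, jm, nfaces), q(im, jm, nfaces)
  real(8) :: zsum(nbands), zarea(nbands)
  integer :: i, j, f, band

  call random_number(lat2d)
  lat2d = (lat2d - 0.5d0)*pi      ! fake cell-center latitudes, in radians
  dxyp = 1d0                      ! fake cell areas
  q = 1d0                         ! fake field to be averaged

  zsum = 0d0;  zarea = 0d0
  do f = 1, nfaces
    do j = 1, jm
      do i = 1, im
        band = 1 + int((lat2d(i, j, f) + 0.5d0*pi)/pi*nbands)
        band = min(max(band, 1), nbands)
        zsum(band)  = zsum(band)  + q(i, j, f)*dxyp(i, j, f)
        zarea(band) = zarea(band) + dxyp(i, j, f)
      end do
    end do
  end do
  where (zarea > 0d0) zsum = zsum/zarea   ! area-weighted zonal means
  print *, zsum
end program zonal_mean_sketch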


plans for a branch?

Posted by kelley Jul 11, 2008

 

A question/topic for today's telecon:

 

 

What are the pros/cons of making a temporary CVS branch for the cubed sphere model E?  As opposed to adding cubed-sphere versions of existing files to the main trunk, or incrementally generalizing the existing files toward compatibility with cubed sphere requirements.

 

 

Tom, Gavin, (how) did you use branches when modifying model E to run with MPI?

 

 

Replacing DXYP(J) with DXYP(I,J) is an example of a code change which can be immediately committed to the main trunk as it is 100% backward compatible.  Other changes could be more disruptive.

 

 

A branch would allow us to work a bit faster, I think:

 

  • Disable all parts of the code which will be "broken" and bring them back one by one.  The broken bits could be hidden within #ifndef CUBED_SPHERE, for example (see the sketch after this list).

  • Delay the task of making code compatible with both the lat-lon and cubed grids.  In some cases it may be easier to focus on getting something which works for the cubed grid and then generalize later.
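A tiny free-form Fortran sketch of the first point, assuming a CPP macro named CUBED_SPHERE and a file compiled with the preprocessor enabled (for example via a .F90 suffix); the guarded print stands in for any lat-lon-only code path.

program branch_sketch
  implicit none
  print *, 'code common to both grids runs here'
#ifndef CUBED_SPHERE
  print *, 'lat-lon-only code, disabled when -DCUBED_SPHERE is set'
#endif
end program branch_sketch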

 

But there is the extra work of keeping up with ongoing changes to the main trunk.

 

 


 

I've been able to install netcdf, openmpi and g95 on my Mac. Now I'm trying to compile FVcubed in a standalone way using a Makefile that I derived from mkmf.template.mac.mpich2.

 

 

It looks like the tarball FVcubed.tar doesn't include all the source that is necessary for the compilation. For example, I would need the fms_platform.h header file.

 

 

Does anybody know where I could get this file? From the FMS cvs server?

 

 

thks!

 

 

-D

Scoping of rad_drv.f

Posted by dgueyffi Jun 28, 2008

I did a first pass through the radiation code. See the attached document RAD_DRV.pdf (THE COMMENTS CAN ONLY BE SEEN IN ACROBAT!) for the comments on the code.

I still have questions about the RADIA routine, which I will put to various people here at GISS.

 

Here is a summary of changes to be made:

  • Some important quantities, like COSZ and COSZA, are functions defined everywhere on the sphere and sampled at cell centers (I,J). All other important quantities, like GTEMP, depend only on local (I,J) column information.

 

  • No special treatment for cube vertices because quantities are sampled at cell centers (we thus avoid the singularities at cube corners)

 

  • Most changes are related to adding a dependence on (I,J) instead of only J. We will rely on array definitions made in GEOM_B.f (like SINIP(I,J) or DXYP(I,J)). Some changes are necessary because Latitude = constant no longer corresponds to J = constant.

 

  • It would be most useful to use an object for the grid. Bill, Tom, are f95/f2003 objects compatible with MPI and domain decomposition?

 

  • Regridding will be used to check that quantities are globally conserved on the cubed sphere. In the present case we must check that (1/A) \int COSZ dA = 1/4, where A is the area of the sphere (see the sketch below).
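To make the check concrete: the area-weighted global mean of COSZ (with the night side counted as zero) should be 1/4, since the lit hemisphere projects onto a disc of area pi R^2 while the sphere's area is 4 pi R^2. The free-form Fortran sketch below verifies this on a fake lat-lon sampling; on the cubed sphere the weights would be DXYP(I,J) from GEOM_B.f, with the integral evaluated via the regridding/exchange grid.

program cosz_check_sketch
  implicit none
  integer, parameter :: nlon = 360, nlat = 180
  real(8), parameter :: pi = 3.141592653589793d0
  real(8) :: lat, lon, cosz, w, s, a
  integer :: i, j

  s = 0d0;  a = 0d0
  do j = 1, nlat
    lat = (-90d0 + (j - 0.5d0))*pi/180d0
    w = cos(lat)                             ! proportional to cell area
    do i = 1, nlon
      lon = (i - 0.5d0)*pi/180d0
      cosz = max(cos(lat)*cos(lon), 0d0)     ! subsolar point at lat = 0, lon = 0
      s = s + cosz*w
      a = a + w
    end do
  end do
  print *, 'area-weighted global mean of COSZ =', s/a   ! expect ~0.25
end program cosz_check_sketch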

 

