3. CCM3.6 Internals


3.1 Design Philosophy

Large numerical model codes are notorious for being difficult to read, understand, and modify. In 1989, an international committee of geophysical modelers wrote a set of coding rules designed to address these problems, with specific emphasis on making parameterizations "plug-compatible". Their paper, "Rules for Interchange of Physical Parameterizations" (Kalnay et al., 1989), provided valuable guidance in the design of the previous Community Climate Model, CCM2 (Bath et al., 1992). Use of these coding rules was continued in CCM3 and CCM3.6.

The control structure of CCM3 with respect to initialization, time integration, and gridpoint-to-spectral transformations is similar to that of CCM2. Some changes were made in CCM3 to allow a straightforward coupling of the atmospheric model to land, ocean, and sea ice components. For example, the latitude loop that invokes the physical parameterizations was split into two routines to facilitate the exchange of flux information between models. Modifications were also made in CCM3 and CCM3.6 to allow the code to run on architectures other than Cray PVP (parallel vector processor) machines. In addition to Cray PVP machines, CCM3.6 is supported on SGI, SUN, and HP architectures in both shared-memory and distributed-memory (MPI) variants. Hooks also exist for T3D/E and RS6K machines, but the former no longer works following the migration to netCDF (described below), and the latter is no longer supported because RS6K platforms are no longer available to the NCAR Climate Modeling Section.

One important change from CCM3 to CCM3.6 is that most input datasets were converted to netCDF, a self-describing binary format. One major advantage of netCDF is that its files are machine-independent: the same dataset can be read unchanged on any machine on which the code is run.
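
As an illustration, a field can be read from a netCDF dataset through the standard netCDF Fortran-77 interface along the following lines; the file name, variable name, and dimensions here are hypothetical rather than those of an actual CCM3.6 dataset:

      subroutine rdinit
      implicit none
      include 'netcdf.inc'
      integer plon, plat
      parameter (plon=128, plat=64)
      real phis(plon,plat)
      integer ncid, varid, ret
c Open the (hypothetical) initial dataset read-only.
      ret = nf_open('SEP1.T42.nc', NF_NOWRITE, ncid)
      if (ret .ne. NF_NOERR) stop 'nf_open failed'
c Locate the surface geopotential field and read it; the same
c file can be read unchanged on any supported machine.
      ret = nf_inq_varid(ncid, 'PHIS', varid)
      ret = nf_get_var_real(ncid, varid, phis)
      ret = nf_close(ncid)
      return
      end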

CCM3 retains the ability to run with some data structures stored out-of-core. This feature is probably only of value on machines that provide an extremely fast I/O device, such as the Cray solid-state storage device (SSD). The model can also retain some or all of the main model buffers in-core. The namelist variables that control whether these buffers are kept in-core or out-of-core are INCORHST, INCORBUF, and INCORRAD. As in CCM2, the main model buffer of CCM3 consists of variables that must be carried at more than one time level, across more than one latitude scan, or for storage contiguity.
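
For example, to keep all three buffers in memory, the corresponding logical variables can be set in the model's main namelist group, CCMEXP; the syntax shown is a sketch:

 &CCMEXP
  INCORHST = .TRUE.,
  INCORBUF = .TRUE.,
  INCORRAD = .TRUE.,
 &END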

In CCM3, memory is managed with the following goals in mind:

To accommodate the semi-Lagrangian transport code, certain variables need to be retained in main memory as three-dimensional arrays (dimensioned longitude by vertical level by latitude) at two time levels. These arrays are placed in named common /com3d/ and therefore remain in main memory throughout a run.
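
A schematic of this layout is shown below; the array names, dimensions, and the exact contents of /com3d/ are illustrative rather than a transcription of the actual common block:

c Prognostic fields kept in main memory at two time levels,
c dimensioned longitude by level by latitude; the last index
c selects the time level.
      integer plond, plev, plat, ptimelevels
      parameter (plond=130, plev=18, plat=64, ptimelevels=2)
      real u3(plond,plev,plat,ptimelevels)
      real v3(plond,plev,plat,ptimelevels)
      real t3(plond,plev,plat,ptimelevels)
      real q3(plond,plev,plat,ptimelevels)
      common /com3d/ u3, v3, t3, q3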

Temporary storage, such as that for Fourier coefficients during the spectral transform, is maintained as locally declared arrays on the "stack" (the automatic dynamic memory management mechanism). These data structures are described in "Data Structures".
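
The sketch below illustrates the mechanism, assuming the code is compiled with stack (non-static) allocation of local data; the routine and array names are hypothetical:

      subroutine fftwrk
      implicit none
      integer plond, plev
      parameter (plond=130, plev=18)
c Local work array for Fourier coefficients.  With stack-based
c allocation this array is private to each invocation (and to
c each processor in a multitasked run) and is released on return.
      real fcoef(plond,plev)
      integer i, k
      do 10 k = 1, plev
         do 10 i = 1, plond
            fcoef(i,k) = 0.
   10 continue
      return
      end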

Within the time integration loop inside stepon, only the control routines linemsbc, linemsac, and spegrd interface directly with the model buffers. At the beginning of these routines, Cray pointers are used to associate fields in the model buffers with mnemonically named field arrays. The physical parameterization routines called by these control routines then receive the fields through argument lists, with no need to index directly into the buffers. Plug-compatibility is achieved through these mnemonics and by using argument lists rather than common blocks, wherever possible, to communicate data between routines.
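
A schematic of this association follows; the buffer layout, field offsets, and routine names are hypothetical, and only the Cray pointer idiom itself is taken from the model:

      subroutine linemsx(buf)
      implicit none
c One latitude line of the model buffer, passed from the control
c level.  The field offsets below are hypothetical; the real
c offsets are computed at initialization.
      integer plond, plev
      parameter (plond=130, plev=18)
      integer it3, iq3
      parameter (it3=1, iq3=it3+plond*plev)
      real buf(*)
c Cray pointers attach mnemonic array names to buffer locations
c without copying any data.
      real t3(plond,plev), q3(plond,plev)
      pointer (pt3, t3), (pq3, q3)
      pt3 = loc(buf(it3))
      pq3 = loc(buf(iq3))
c Physics routines receive the fields through argument lists.
      call tphysx(plond, plev, t3, q3)
      return
      end

      subroutine tphysx(plond, plev, t3, q3)
      implicit none
      integer plond, plev
      real t3(plond,plev), q3(plond,plev)
c ... a parameterization would operate on t3 and q3 here ...
      return
      end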

Most of the parallel processing is accomplished at the latitude loop level, with an arbitrary number of processors sharing the work. Data structures not dimensioned by latitude are allocated on the stack, so that each processor has its own copy of writable memory for these arrays. I/O to the out-of-core scratch files is implemented as Fortran direct-access, synchronous I/O to accommodate the random order of latitude loop execution. When running multitasked, all data accumulations (e.g., Gaussian quadrature) occur in the same order as when running single-threaded, guaranteeing identically reproducible results across multiple runs.
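
A common way to realize this guarantee, sketched below, is to store each latitude's contribution into a latitude-indexed array inside the parallel loop and then accumulate in a single fixed-order loop afterward; this illustrates the technique rather than transcribing the CCM3 code:

      subroutine qsum(plat, partial, total)
      implicit none
      integer plat
      real partial(plat), total
      integer lat
c partial(lat) was filled inside the multitasked latitude loop in
c whatever order the tasks ran.  Accumulating it here in a fixed
c south-to-north order makes the sum bit-for-bit reproducible
c regardless of task scheduling.
      total = 0.
      do 10 lat = 1, plat
         total = total + partial(lat)
   10 continue
      return
      end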

The CCM3 history file interface is designed so that users can easily record whatever fields they choose on the output history files. Fields that appear in the Master Field List, which is generated at initialization time by subroutine bldfld, may be included on the history file either by default or by namelist request. A field is recorded in the history file buffer by calling subroutine outfld.
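
For example, a physics routine records the surface temperature for the current latitude with a single call; the argument list shown (field name, field, leading dimension, latitude index, history buffer) follows the CCM3 convention but should be checked against outfld itself:

c Record surface temperature for this latitude on the history file.
c 'TS      ' must match an entry in the Master Field List built
c by bldfld; ts is dimensioned (plond).
      call outfld('TS      ', ts, plond, lat, hbuf)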

Provision is made for recording instantaneous or time-averaged values of a field, as well as its maxima or minima. Fields are written to the history file within a multitasked loop over latitudes. Because multitasking makes the order of execution of the latitude loop unpredictable, the sequentially written history file may be randomly ordered by latitude. Each latitude record therefore contains the latitude index as its first word; indices increase from south to north.
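
As an illustrative sketch, a post-processor can recover the latitude of each record from its first word along the following lines; the file name and unit number are hypothetical, the index is assumed to be stored as an integer word, and a real reader must decode the rest of each record according to the history file format:

      program rdlat
      implicit none
      integer lat
      open (10, file='history.dat', form='unformatted',
     $      status='old')
c Records arrive in whatever latitude order the multitasked
c loop produced; the first word of each record is assumed to
c hold the integer latitude index, increasing south to north.
   10 read (10, end=20) lat
      write (6,*) 'record for latitude ', lat
      go to 10
   20 close (10)
      end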


