2.1.2 Creating and Running the Executable
CCM3.6 may be built to run in one of two modes. It can run as a stand-alone
executable in which the Land Surface Model (LSM) and the ocean component are
built into the executable (see the "LSM
User's Guide" for a complete description of using the LSM). This will
be referred to as stand-alone mode. In this mode, the model's ocean component
can be run in one of two configurations: data ocean or slab ocean. The data
ocean configuration uses climatological sea surface temperatures, whereas
the slab ocean configuration uses datasets of ocean mixed layer depth and
ocean mixed layer heat flux.
Alternatively, CCM3.6 can be run as a component in a system of geophysical
models (CSM) in which two-way interaction is enabled through the use of
a flux coupler (Bryan et al., 1996). In this mode, the LSM, ocean
and sea-ice models are run as separate executables that communicate with
the atmospheric component via the flux coupler. This will be referred to
as flux-coupled mode. The generation and execution of a flux-coupled executable
will not be addressed in this document. A separate document addressing
these issues is forthcoming.
To create a stand-alone executable, the user must build the model for
the desired target architecture, type of model dynamics, ocean configuration,
and if applicable, distributed memory (SPMD) implementation.
Three target architectures are supported: Cray, SGI, and Sun. Two types
of dynamics are possible, Eulerian or semi-Lagrangian (the latter is
still experimental and will be supported in later releases). Furthermore,
either data ocean or slab ocean configurations can be used. Finally, the
distributed memory implementation can only be used on non-Cray architectures
(see "Running the Distributed Memory Implementation").
All files relating to the creation and execution of the stand-alone
executable are contained in the ccm/bld directory. There are four
architecture-dependent scripts in this directory which provide step-by-step
instructions for building and running the executable. These build scripts
create a model executable, determine the input datasets and construct the
input model namelists. It is assumed that the user will run the appropriate
build script from the ccm/bld directory and that all
input datasets have been untarred. The build script will then create
the directory ccm/run. The build scripts can be run with
minimal user modification, assuming the user has untarred the input
datasets in the directory ccm/data. These scripts are provided
as examples to help the novice user get CCM3.6 up and running as quickly
as possible. Users who are already familiar with running the CCM will
want to modify these scripts or use their own build procedure.
The default model configuration resulting from each build script is summarized
below.
build.cray_ncar.csh
    T42 resolution, Eulerian dynamics, data ocean component, no SSD,
    NCAR Cray computers

build.cray_nonncar.csh
    T42 resolution, Eulerian dynamics, data ocean component, no SSD,
    non-NCAR Cray computers

build.noncray.csh
    T42 resolution, Eulerian dynamics, data ocean component,
    SGI or Sun computers

build.noncray_spmd.csh
    T42 resolution, Eulerian dynamics, data ocean component,
    distributed memory (SPMD) implementation, SGI or Sun computers
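For example, a user on an SGI or Sun workstation might unpack the input
datasets and invoke the default build script as follows (the tar file name
shown is illustrative only; use the archive obtained with the distribution):

    cd ccm/data
    tar xf ccm3.6.datasets.tar    # illustrative archive name
    cd ../bld
    ./build.noncray.csh           # builds and runs the default T42 configuration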
Each script can be divided into four functional sections: 1) specification
of environment variables; 2) creation of three header files and a directory
search path file needed to build the model executable; 3) creation of the
model namelist; and 4) execution of the model.
2.1.2.1 Specification of environment variables
The following environment variables must be set from within the build script.
MODEL_SRCDIR
    Full pathname for the source code directory hierarchy.
    Default setting is the full pathname for ccm/src.

MODEL_EXEDIR
    Full pathname for the directory where the model executable will
    reside (object files will be built in the directory
    $MODEL_EXEDIR/obj).
    Default setting is the full pathname for ccm/run.

MODEL_DATDIR
    Full pathname for the directory where the input datasets reside
    (see "Model Input Datasets").
    Default setting is the full pathname for ccm/data.

CPU
    Target architecture. Must be set to CRAY, SGI, or SUN (in uppercase).
    Default setting for the Cray build scripts is CRAY; for the
    non-Cray build scripts it is SGI.

NCAR_CRAY
    Additional environment variable needed if running on a Cray.
    Should be set to TRUE if running on an NCAR Cray, otherwise
    should be set to FALSE.
    Set automatically by the Cray build scripts.

SSD
    Additional environment variable needed if running on a Cray.
    Should be set to TRUE if using the solid-state storage device
    (SSD), otherwise should be set to FALSE.
    Default setting is FALSE.

LIB_NETCDF
    Full pathname for the directory containing the netCDF library.
    Default setting is /usr/local/lib.

INC_NETCDF
    Full pathname for the directory containing the netCDF include files.
    Default setting is /usr/local/include.

MAXCPUS
    Maximum number of processors used for multitasking. Should not
    exceed the number of physical CPUs on the machine.
    Default setting is 16 for Crays and 2 for non-Crays.

LIB_MPI
    Full pathname for the directory containing the MPI library.
    Default setting is /usr/local/lib.

INC_MPI
    Full pathname for the directory containing the MPI include files.
    Default setting is /usr/local/include.
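As an illustration, the environment variable section of a non-Cray build
script might contain csh lines such as the following (the pathnames are
examples only and must be adjusted for the local site):

    setenv MODEL_SRCDIR /home/user/ccm/src      # example pathname
    setenv MODEL_EXEDIR /home/user/ccm/run
    setenv MODEL_DATDIR /home/user/ccm/data
    setenv CPU          SGI
    setenv LIB_NETCDF   /usr/local/lib
    setenv INC_NETCDF   /usr/local/include
    setenv MAXCPUS      2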
2.1.2.2 Creation of header files and directory search path
Using the values of the above environment variables, the build script creates
the header files misc.h, params.h, and preproc.h and
the directory search path file Filepath. These files are placed
in the directory $MODEL_EXEDIR/obj. To modify these files, the
user should edit their contents from within the build script rather than
edit the files directly, since the build script will overwrite the files
each time it is executed. The use of these files by gnumake
is discussed in "Details of gnumake procedure".
2.1.2.3 Details of gnumake procedure
cpp directives of the form #include, #if defined,
etc., are used to enhance portability, and allow for the implementation
of distinct blocks of platform-specific code within a single file. Header
files, such as misc.h, are included with #include statements
within the source code. When gnumake is invoked, the C preprocessor
includes or excludes blocks of code depending on which cpp tokens
have been defined. cpp directives are also used to perform textual
substitution for resolution-specific parameters in the code. The format
of these cpp tokens follows standard cpp protocol in
that they are all uppercase versions of the Fortran variables they define.
Thus, a code statement like

    parameter(plat = PLAT)

will result in the following processed line (for standard T42 resolution):

    parameter(plat = 64)
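The token itself is defined in one of the generated header files. At T42
resolution the definition would take a form similar to the following
(an illustrative excerpt only; the exact contents of the generated header
files may differ):

    #define PLAT 64    /* number of Gaussian latitudes at T42 */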
gnumake generates a list of source and object files using each
directory listed in Filepath. For each source file, gnumake
invokes cpp to create a dependency file in the directory $MODEL_EXEDIR/obj.
For example, file.F will have a dependency file, file.d.
If a file listed as a target of a dependency does not exist in $MODEL_EXEDIR/obj,
gnumake searches the directories contained in Filepath, in
the order given, for a file with that name. The first file found satisfies
the dependency. If user-modified code is to be introduced, Filepath
should contain, as the first entry, the directory containing the user code
(see "Modifying the Code"), as illustrated in the sketch below.
A parallel gnumake is achieved in the build scripts by using
gnumake with the -j option, which specifies the number of
jobs (commands) to run simultaneously. Caution should be exercised on Cray
machines: if this number is set too high, it is possible to exceed
memory and/or process-count limits. The current default
value in the build scripts is 4.
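The corresponding invocation in the build scripts is therefore of the form:

    gnumake -j 4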
2.1.2.4 Running the Model
In addition to building the model executable, the build scripts allow the
user to define the appropriate input datasets, generate the model namelist
and run the executable. Input datasets and model namelists are discussed
in "Model Input Datasets" and
"Model Input Variables", respectively.
If the code is targeted to run multitasked on a CRAY PVP machine, the environment
variable $NCPUS will also be set. Its value should not exceed
the number of physical CPUs on the machine. The build script sets
$NCPUS to the value of the script environment variable $MAXCPUS.
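In csh syntax, this is accomplished with a line of the form:

    setenv NCPUS $MAXCPUS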
For CRAY architectures, the build script can be run in either batch
or interactive mode. Since climate integrations are often run for long
periods of time, it may be more convenient in such cases to use a batch
facility to complete the run. The following "QSUB" directives at
the top of the build.cray_ncar.csh or build.cray_nonncar.csh
script must be modified accordingly. Note that the following memory and
SSD limits are sufficient for running at T42 horizontal resolution multitasked
on eight processors. Refer to "Shared-Memory
Management" for more information on memory requirements.
#QSUB -q reg
    Job class, regular

#QSUB -lT 600
    CPU time limit for one simulated day, 600 seconds

#QSUB -lM 37Mw
    Main memory requirement without SSD, 37 megawords
    (comment out if SSD is to be used)

#QSUB -lM 23Mw
    Main memory requirement with SSD, 23 megawords
    (uncomment if SSD is to be used)

#QSUB -lQ 14Mw
    SSD memory requirement, 14 megawords
    (uncomment if SSD is to be used)

#QSUB -eo
    Directs stdout and stderr to the same file

#QSUB -s /bin/csh
    Selects the shell

#QSUB -x
    Exports environment variables
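Assembled at the top of the script, the directives for a T42 run without
the SSD would look similar to the following:

    #QSUB -q reg
    #QSUB -lT 600
    #QSUB -lM 37Mw
    #QSUB -eo
    #QSUB -s /bin/csh
    #QSUB -x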
The script may then be submitted to the batch facility NQS (Network
Queuing System) via the command
qsub build.cray_ncar.csh