# $Id: README,v 1.17 2009-10-27 20:33:09 Ludescher-Furth Exp $

Modifications:
  dmc Apr 27 2009: time series of Plasma States (from trxpl) can be input.

I. Instructions for testing nubeam_comp_exec:
=============================================
(Instructions for running parallel NUBEAM are at the end.)

Two test scripts are provided with supporting data:

  tftr_test.csh          -- script to run TFTR test
  tftr_input_state.cdf   -- Plasma State with input data for TFTR test
  tftr_output_state.cdf  -- Plasma State with reference output data
  tftr_plots.ind         -- cstate script to plot Plasma State data
  tftr_test.msgs         -- reference copy of messages produced by
                            running the script

  d3d_test.csh           -- script to run D3D test (data from the
                            DIII-D tokamak)
  d3d_input_state.cdf    -- Plasma State with input data for D3D test
  d3d_output_state.cdf   -- Plasma State with reference output data
  d3d_plots.ind          -- cstate script to plot Plasma State data
  d3d_test.msgs          -- reference copy of messages produced by
                            running the script

The .csh scripts demonstrate similar examples of how to use the
nubeam_comp_exec program.  The comments in the nubeam_comp_exec.f90
driver contain many more details.

II. Basic PROCEDURE for running the test scripts:
=================================================

(II.0) Prerequisite: access to NTCC PREACT software is required; the
directory containing the PREACT atomic and nuclear physics database
must be identified.  For example:

  setenv PREACTDIR /p/nubeam/transp_pll/preact/

The actual value will depend on how the NTCC PREACT software was
installed and set up on your system.

Note: for MPI applications, or for installations where a single
PREACTDIR may be shared by multiple users or jobs, it is best if the
entire contents of the database are generated in advance (rather than
relying on the "preact" software's capability to generate table files
as requested "on the fly").  A user or installer with write access to
PREACTDIR can run the program "fpreact_test" to fill out the database.
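The prerequisite setup above can be collected into a short fragment.
This is a sketch in sh syntax (the README's examples use csh
"setenv"); the path is the PPPL example from the text, and the
one-time database-fill commands are shown as comments since they
require the installed PREACT module:

```shell
# Sketch of the PREACT prerequisite setup, in sh syntax.
# Substitute your own installation's location for the example path.
export PREACTDIR=/p/nubeam/transp_pll/preact/

# One-time, for an installer with write access to $PREACTDIR:
#   preactinit      # initialize the database directory (PREACT README, Sec. 7)
#   fpreact_test    # pre-generate the full table database

echo "PREACTDIR=$PREACTDIR"
```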
This program is one of the test drivers provided with the "preact"
NTCC module.  To run fpreact_test, $PREACTDIR must have been
initialized with preactinit.  See:

  http://w3.pppl.gov/ntcc/PREACT/README   (Section 7)

(II.1) cd to the directory containing the data and scripts listed
above.  Let <x> stand for either "tftr" or "d3d".  To run the test,
do this:

  csh -f <x>_test.csh

or, alternatively:

  time csh -f <x>_test.csh > output.msgs

to capture stdout; a few informative messages are written to stderr
(not errors), which can be compared with <x>_test.msgs (an exact
match is not expected).

The procedure runs nubeam_comp_exec serial tests using the input data
provided.  Each test takes about 2 minutes on a modern workstation.
The messages written to stdout can be compared to the reference
output files <x>_test.msgs provided (which may not always be up to
date).

(II.2) The procedure creates a subdirectory wkdir_<x>_test ... to
examine the data produced by the test, cd to this directory in an
xterm window and do this:

  cstate @<x>_plots.ind     [e.g. "cstate @d3d_plots.ind" for the D3D case]

(It is important to use a real xterm window with Tektronix graphics
emulation.)  The NTCC sglib graphics software must be installed and
working, which is likely the case if cstate was built successfully.

(II.3) To compare to the reference copy of the data, run the SAME
command in the parent directory, in a separate xterm window.  This
allows plots from both the reference dataset and the newly generated
dataset to be examined side by side.

(II.4) Note that the RNG seed is chosen to be different on each run,
and there will be noticeable variability in the outputs from run to
run.  This does not imply a failure of the code -- it is Monte Carlo
statistical variance.  The variance is usually most pronounced near
the magnetic axis, where the target bins for Monte Carlo sums are
smallest.

(II.5) If the NTCC tr_client module is installed, the program
"get_fbm" is available.
These tests select an option which causes the fast ion distribution
function to be written in this NetCDF file:

  wkdir_<x>_test/<runid>_fbm_data.cdf

where <runid> indicates the TRANSP run from which the test input data
was derived (<runid> may change over time due to updates of Plasma
State datasets in the course of code development).  There will only
be one file that fits this filename template.

The "get_fbm" program can be used to plot both deposition
distribution and slowing down distribution function data from the
simulation.

----------------

PHYSICS NOTE: the test scripts calculate only two NUBEAM time steps
towards an evolution of fast ion slowing down distribution
function(s) against a fixed target plasma.  To get an equilibrated
distribution, the code must be run for enough steps to cover a full
slowing down time -- this takes more time than is allocated to the
serial test.

POSSIBLE PHYSICS APPLICATION: the MPI version, mpi_nubeam_comp_exec,
can be run to produce high-statistics equilibrated slowing down
distribution functions, starting from any time slice of any archived
TRANSP run.  The input plasma state is produced using the NTCC
tr_client program "trxpl" (filename controlled by the interactive
user).  Run-specific NUBEAM namelist information also exists in more
recent TRANSP run archives; these files have names of the form:

  <runid>_nubeam_init.dat

... but even if this data is missing, typical NUBEAM calculations can
be run using default namelist settings.  This creates high-statistics
distribution function data for "get_fbm"; several first-principles
codes exist that can make use of this data.

MORE ON PHYSICS APPLICATION: "trxpl" can now extract a time series of
states, which [mpi_]nubeam_comp_exec can use to recalculate the
accumulation of fast ions in the lead-up to a time of interest, but
this time with a time-evolving target based on prior experiment or
simulation data.  This should allow high-statistics replication of
TRANSP calculation results over user-specified time subdomains.
----------------

(II.6) [DMC Feb 16 2009] -- test scripts now accept a variety of
options.  Note: the "qsub" option invokes PPPL-specific scripts and
is currently only portable to franklin (NERSC).  The following email
excerpt summarizes the new options:

-- ./d3d_test.csh
   ... runs as before:
   --> NPTCLS=20000;
   --> step count 2x0.010, i.e. two 0.01s steps;
   --> new RNG seed on each run;
   --> runs serial job immediately on current machine.

-- ./d3d_test.csh hold
   --> as above, but the RNG seed in the namelist is used.

-- ./d3d_test.csh init_only hold
   --> RNG seed in the namelist is used; only initialization is done;
   --> can now set environment variables and run with a debugger:
       o setenv NUBEAM_ACTION step
       o setenv NUBEAM_WORKPATH wkdir_d3d_test
       o setenv NUBEAM_REPEAT_COUNT 2x0.010
       o totalview nubeam_comp_exec.dbx ...

-- ./d3d_test.csh -nptcls 1000
   --> standard run but with nptcls=1000 instead of 20000.

-- ./d3d_test.csh -nptcls 5000 -repeat 15x0.010 qsub
   --> test with 5000 ptcls, fifteen 0.01s steps, serial batch job;
   --> "qsub" option for PPPL PBS submit with PPPL PBS script.

-- ./d3d_test.csh -nptcls 100000 -repeat 15x0.010 -ncpu 8 qsub [-q <queue>]
   --> 100000 ptcl 8-cpu MPI batch job, fifteen 0.01s steps;
   --> "qsub" option for PPPL or Franklin PBS submit with PBS script;
   --> -q debug = submit into PBS queue "debug" (Franklin).

-- ./d3d_test.csh -ncpu 8
   --> 8-cpu job run immediately in current environment;
   --> must be in "interactive" MPI queue with 8 cpus assigned;
   --> in theory MPI-enabled totalview can run (I haven't tried).

In general this should give some added flexibility for using
standalone nubeam_comp_exec and mpi_nubeam_comp_exec for NUBEAM
testing -- but the "qsub" option uses PPPL-specific TRANSP job
control scripts.  MPI runs at other sites will need site-specific
script programming.  If "portable" scripts can be developed, these
could be added to the NTCC nubeam_comp_exec test program
distribution.
Note for Franklin: To use these scripts, define the environment
variables NERSC and MPI_CMD (e.g.: setenv NERSC 1; setenv MPI_CMD
aprun).  Be sure scripts and executables are in your PATH, and that
TRANSP_LOCATION is defined.  For testing:

  PATH = /LINUX/test:/etc
  TRANSP_LOCATION = /etc

III. Running nubeam_comp_exec on other data; nbx_driver script.
===============================================================

It may be useful to use nubeam_comp_exec to produce fast ion
distribution functions with high statistics, based on data from
normal (i.e. modest-statistics) TRANSP runs.  TRANSP runs archive the
data needed to drive nubeam_comp_exec at any time of interest.  In
addition, modern runs store namelist data suitable for initialization
of nubeam_comp_exec:

  <runid>_nubeam_init.dat   (in file-based archives)
  NB_NAMELIST               (MDS+ node name for this data)

The "trxpl" program (part of the xplasma NTCC module) can be used to
extract Plasma State time slices or constrained-range time series as
input data.

Recommended procedure: create a working directory to contain the
input data for nubeam_comp_exec.  Then, use the "nbx_driver" script
to carry out the run.  Arguments to "nbx_driver" control MPI access,
handling of output data, etc.  A summary follows:

nbx_driver command line arguments:

  init_only -- stop after INIT (e.g. for debugging)
  restart   -- start from existing state files
  qsub      -- run as PBS job (MPI if #procs > 1)
  hold      -- hold RNG seed fixed (instead of setting it from the
               system clock); this means "nseed" in the init namelist
               is used.
command line arguments with values:

  -ncpu <n>            -- if >1, use MPI with the indicated number of
                          processors
  -nptcls <n>          -- use standard namelist with nptcls & nptclf
                          modified
  -init <file>         -- use indicated init namelist
  -step <file>         -- use indicated step namelist
  -input_state <file>  -- use indicated input state file
  -input_list <file>   -- use LIST of input states (time series)
  -output_state <file> -- create indicated output state file
  -plot_script <file>  -- copy indicated plot script file
  -wkpath <path>       -- workpath to use
  -repeat <n>x<dt>     -- specify number & duration of time steps
  -postproc <option>   -- post-processing (default: fbm_write)

example:

  nbx_driver qsub hold -ncpu 4 -input_state 37065R12_state.cdf \
      -output_state tftr_output_state.cdf \
      -init 37065R12_namelist_init.dat \
      -wkpath try1 \
      -repeat 10x0.01

This example would use the indicated Plasma State file and
initialization namelist, and produce the indicated output after
taking ten 0.01-second simulation steps.  The job would be a 4-cpu
MPI job running under PBS (some features are tested only at PPPL and
may need work to run correctly on other systems).  The random number
seed would be taken from the init namelist.

IV. Summary guide to nubeam_comp_exec:
======================================

Program execution is controlled by environment variables as well as
input data.  The action of the scripts described in section II is to
place the input data into $NUBEAM_WORKPATH and set the necessary
environment variables.  This section provides information to allow a
user to do this "by hand" or in custom-developed, site-specific
script code.

The environment variables used by [mpi_]nubeam_comp_exec are:

NUBEAM_WORKPATH -- path to the directory containing all input and
  output data files.  If undefined, the current working directory is
  used -- NOT RECOMMENDED!  There are a lot of working files.

NUBEAM_ACTION -- action to be taken on this execution of
  nubeam_comp_exec.  This environment variable must be defined.

  INIT -- initialize; a new RNG seed is selected from the system
    clock.
  INIT_HOLD -- initialize; the RNG seed from the namelist is used
    unchanged.  If the INIT files namelist specifies step data, then
    a time step advance (or $NUBEAM_REPEAT_COUNT steps) is performed
    in the same execution of nubeam_comp_exec.  The RNG software used
    is the one provided in the NTCC by Charles Karney.

  STEP -- advance a time step.  The time step length (seconds) is
    ps%t1 - ps%t0, where ps is the input plasma state data; more on
    input files below.

  BACKUP -- store a backup copy of NUBEAM's internal state: files
    containing saved field data and Monte Carlo particle lists.  This
    represents NUBEAM private state data, not the same as the NTCC
    Plasma State data.

  RETRIEVE -- restore a backup copy of NUBEAM's internal state; this
    makes the backup copy the current state.  This allows a
    calculation to be "rolled back" to a previously saved BACKUP time
    point.

  (All actions can be run serially or in MPI mode.)

NUBEAM_REPEAT_COUNT -- specify the number and duration of NUBEAM time
  steps.  If running against a single, fixed target plasma state,
  this environment variable can be used to set up a "run to
  equilibrium" against that single target state.  If a time series of
  input states is provided, this variable specifies how far to run
  (or, on a restart, how much further to run) through the time
  series.

  Syntax: <n>x<dt>; examples:

    10x0.010 -- ten 0.01s steps
    20x0.025 -- twenty 0.025s steps
    1x0.005  -- one 0.005s step

  If undefined, and a single target Plasma State is input: execute a
  single step defined by the input Plasma State.  This would be the
  normal mode in a full time-dependent simulation which uses
  [mpi_]nubeam_comp_exec as a sub-procedure.

  If undefined, and a time series of Plasma States is input: a time
  step size is chosen according to the average time spacing of the
  input Plasma States.  If sawteeth are indicated, the time step size
  is adjusted to meet these flush.  The number of time steps is set
  as needed to run NUBEAM through the entire time series of input
  states.
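For site-specific scripting, a <n>x<dt> value is easy to split with
standard shell parameter expansion.  A minimal sh sketch (the
variable name matches the README; the helper variables nsteps and dt
are illustrative, not part of NUBEAM):

```shell
# Split a NUBEAM_REPEAT_COUNT value of the form <n>x<dt>, e.g. "10x0.010",
# into the step count and the step length in seconds.
NUBEAM_REPEAT_COUNT=10x0.010
nsteps=${NUBEAM_REPEAT_COUNT%x*}   # text before the "x" -> number of steps
dt=${NUBEAM_REPEAT_COUNT#*x}       # text after the "x"  -> seconds per step
echo "nsteps=$nsteps dt=$dt"
```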
  Note that no intermediate states are saved (to get these, one must
  use NUBEAM_REPEAT_COUNT to control when to stop the time advance of
  the calculation).

NUBEAM_POSTPROC -- post-processing options:

  FBM_WRITE -- write a distribution function file

      <runid>_fbm_data.cdf

    where <runid> is the runid label string given in the Plasma State
    input file.  The file is written in $NUBEAM_WORKPATH.

  FBM_WRITE:<filename> -- as above, but with the output filename
    reset to the value provided after the colon.  For example,
    FBM_WRITE:foo.cdf would have the distribution function data
    written to foo.cdf.  The file is written in $NUBEAM_WORKPATH.

  SUMMARY_TEST -- produce additional printed output on stdout, by
    calling the "nbdrive_summary" and "nbdrive_summary_2d"
    subroutines, which demo access to NUBEAM output details not
    accessible through the Plasma State output dataset.

  NONE -- no post-processing.  This is the default (i.e. if the
    environment variable is undefined), and the normal choice for
    time-dependent simulation.

Files: input and output files are named through short namelist files
read during nubeam_comp_exec execution.  These files have fixed
names:

$NUBEAM_WORKPATH/nubeam_init_files.dat -- names of additional files
  used on an INIT execution.  Example:

    &NUBEAM_FILES
      input_plasma_state = "my_old_state.cdf"
      plasma_state_update = "state_changes.cdf"
      init_namelist = "nubeam_init_example.dat"
    /

  alternate form (to indicate a file containing a list of a time
  series of Plasma States as generated by trxpl):

    &NUBEAM_FILES
      input_plasma_state = "list:my_states.list"
      plasma_state_update = "state_changes.cdf"
      init_namelist = "nubeam_init_example.dat"
    /

$NUBEAM_WORKPATH/nubeam_step_files.dat -- names of additional files
  used on a STEP execution.
  Example:

    &NUBEAM_FILES
      input_plasma_state = "my_old_state.cdf"
      plasma_state_update = "state_changes.cdf"
      step_namelist = "nubeam_step_example.dat"
    /

  alternate form (to reference the list of states seen at INIT time):

    &NUBEAM_FILES
      input_plasma_state = "list"
      plasma_state_update = "state_changes.cdf"
      step_namelist = "nubeam_step_example.dat"
    /

Additional input/output files are described in the comments of the
nubeam_comp_exec.f90 source code.  The most commonly used files are:

input_plasma_state -- the input data, in the Plasma State format.
  This is documented in the NTCC in the Plasma State module, under
  the Module_Library (catalog) link at http://w3.pppl.gov/NTCC.  It
  is also documented at http://www.cswim.org in the component
  description section; the software supporting this format was
  developed as part of the SWIM SciDAC project.  The data includes,
  in summary:

    -- plasma MHD equilibrium
    -- plasma parameters (i.e. temperatures and densities)
    -- detailed machine description, including neutral beam
       geometries
    -- slots for receiving beam heating, torque densities, fueling
       sources, deposition charge exchange neutral sources, etc.

  For testing purposes, Plasma State instances suitable for input to
  nubeam_comp_exec can be extracted from any archived TRANSP time
  slice, using the NTCC program "trxpl".

  At INIT time, the value of this namelist variable can have the form
  "list:<filename>".  In this case, the code will open <filename> in
  the $NUBEAM_WORKPATH directory to find a list of Plasma States in a
  time series.  The states combined with this list file can be
  generated from TRANSP archives using "trxpl".

  At STEP time, the value of this namelist variable can have the form
  "list", which means "use the list read in at INIT time".

init_namelist -- NUBEAM-specific information for initialization.  The
  NUBEAM radial grid and the velocity space grids for distribution
  function output are defined here.
  Usually, the Monte Carlo particle list sizes, nptcls (for beams)
  and nptclf (for fusion products), are defined here.  Most items can
  ONLY be defined at initialization time and cannot be changed
  time-dependently.

  Recent TRANSP run archives (starting late in 2008) include a copy
  of this dataset as used in the TRANSP NUBEAM time-dependent
  simulation: <runid>_nubeam_init.dat for TRANSP run <runid>.  If
  this is missing, useful simulations can still be run, using the
  NUBEAM module documentation to get information on the meaning of
  the various control options.  In almost all runs, the vast majority
  of controls can be defaulted.

step_namelist -- NUBEAM-specific information for a time step: the
  subset of the init_namelist that is allowed (but not required) to
  vary in time.  For example: nptcls, the current Monte Carlo
  particle list size for Neutral Beam deposited fast ions, and
  nptclf, the current Monte Carlo particle list size for Fusion
  Product deposited fast ions.  These could be (and usually are) set
  in the init_namelist and left unchanged through an entire
  multi-step simulation, but it is possible to modify these values
  during the course of a multi-step run.

plasma_state_update -- these are outputs: CHANGES to the Plasma State
  computed by the NUBEAM time step(s) in [mpi_]nubeam_comp_exec (or,
  at initialization time, the profiles cleared to zero).  The Plasma
  State utility "update_state" (distributed in the SWIM svn or in the
  NTCC Plasma State source module) can be used to "merge" the changes
  into a new, updated Plasma State.

The values of these namelist variables contain just the actual
filenames.  All files are read from or written to $NUBEAM_WORKPATH.
The scripts described in section II manipulate these files, e.g. to
specify the desired plasma state input data for a particular test
run, and/or to create modified namelists to control NPTCLS and
NPTCLF.
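Putting section IV together, a "by hand" run reduces to staging the
control files and environment variables, then invoking the executable
once per action.  A minimal sh sketch (the README's scripts use csh
"setenv"); the filenames in the namelist are the illustrative
examples from the text, and the nubeam_comp_exec invocations are left
as comments since the executable and its state/namelist inputs come
from your installation:

```shell
# Stage a working directory and environment for a "by hand" run.
export NUBEAM_WORKPATH=wkdir_byhand
export NUBEAM_REPEAT_COUNT=10x0.010
export NUBEAM_POSTPROC=FBM_WRITE
mkdir -p "$NUBEAM_WORKPATH"

# The fixed-name control file read on an INIT execution
# (filenames illustrative; see the examples above):
cat > "$NUBEAM_WORKPATH/nubeam_init_files.dat" <<'EOF'
 &NUBEAM_FILES
 input_plasma_state = "my_old_state.cdf"
 plasma_state_update = "state_changes.cdf"
 init_namelist = "nubeam_init_example.dat"
 /
EOF

# With the state and namelist files copied into $NUBEAM_WORKPATH:
# NUBEAM_ACTION=INIT nubeam_comp_exec   # initialize (new RNG seed)
# NUBEAM_ACTION=STEP nubeam_comp_exec   # advance (repeat as needed)
```

After the STEP executions, "update_state" can merge the
plasma_state_update changes into a new Plasma State, as described
above.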
-------------------------

Note to maintainer: Use the script "update_ref_files.csh" to replace
the current reference files

  d3d_test.msgs
  d3d_output_state.cdf
  tftr_test.msgs
  tftr_output_state.cdf

with fresh copies based on a run of the current version of the code.

V. Instructions for running parallel NUBEAM
===========================================

(V.1) Interactive parallel runs
-------------------------------

(V.1) Gain access to the interactive parallel queues.  At PPPL, use
the "use" command to request the desired number of processors.

(V.2) Make sure the prerequisites for a serial job are met (see above
under II.0).

(V.3) cd to the directory containing the data and scripts listed in
I -- or copy the files to a scratch disk directory and cd to that
directory.

[In the parallel context, it is preferable to work on a scratch disk
directly attached to the compute server (at PPPL at least).  In
parallel operation, each participating processor writes its particle
list each time step.  It is preferable to have this directory be on a
local disk (i.e. not NFS) to improve performance.  As the directory
structure on the "child processes" mirrors that on the root process,
it is desirable to work on a scratch disk on the root process.  At
PPPL this means a directory under /local.  At other sites the policy
or interpretation may differ; parallel file systems such as Lustre
may play a role.]

(V.4) As in the serial case, let <x> stand for either "tftr" or
"d3d".  To run the parallel test, do this:

  csh -f <x>_test.csh -ncpu <n>

or, alternatively:

  time csh -f <x>_test.csh -ncpu <n> >& output.msgs &

where <n> is the number of processes.  Note that the only difference
between the serial and parallel procedures is the "-ncpu <n>"
argument.

(V.5) Steps (II.2) through (II.5) remain the same for the parallel
operation.

(V.6) See section (II.6) for discussion of "qsub" options for batch
job submission (available at PPPL and franklin only).