From: Stephen C. Jardin
Sent: Saturday, January 20, 2007 6:59 AM
Cc: SciDAC Center for Extended MHD Modeling; Alan Glasser;
Subject: Nonlinear-MHD FY07 Q1 Quarterly Milestone Report Submission

1. M3D to study the effect of isotropic viscosity and thermal conductivity on results of CDX-U nonlinear simulation. (J. Breslau, C. Sovinec)

Isotropic viscosity and thermal conductivity terms have been implemented in M3D. Convergence studies were performed to determine the effects of these terms on linear growth rates of the n=1 mode for the CDX equilibrium with q(0)=0.82. For cases using the artificial sound term for parallel heat conduction, switching to isotropic viscosity was found to increase the growth rate by only about 6%, while switching to isotropic heat conduction did not result in a significant change in the growth rate. For cases without parallel heat conduction, switching to isotropic viscosity nearly doubled the n=1 growth rate, while switching to isotropic heat conduction as well in these cases reduced it by about 13%.

The nonlinear sawtooth run for the q(0)=0.92 equilibrium was repeated with isotropic viscosity switched on and 10 toroidal modes retained.  The initial linear growth rate was found to be 20% higher than with the original perpendicular viscosity. The sawtooth period was reduced by 20% relative to the original case. Separation between kinetic energies in successive toroidal modes was less than before during termination of the crashes, but greater than before during the recovery periods in between. Stochasticity of the magnetic field following the crash was somewhat reduced. A numerical instability occurred in the n=10 mode as its energy dropped to near the background noise level, but this is not believed to have affected these results. A more implicit treatment of the leading-order part of the phi-derivative term in the isotropic operators is being considered to try to alleviate the instability.

2. Decide upon appropriate equilibrium for the ELM test cases. (S. Kruger, P. Snyder)

Initial benchmarks of NIMROD and ELITE were promising in that the
eigenfunctions agreed qualitatively on poloidal mode structure and
radial width, and the growth rates were in qualitative agreement.  The
NIMROD results were sensitive to the choice of diffusivities, and the
ELITE results could be bracketed by varying the appropriate parameters.
However, further work as part of the 2005 and 2006 milestone efforts
showed qualitatively different results in the linear growth rates versus
toroidal mode number, especially for peeling-dominant equilibria.  This
motivates revisiting the ideal code/resistive code benchmark in order to
more thoroughly understand how non-ideal codes approach the ideal limit,
especially for peeling-ballooning modes.

The first task is to decide upon the equilibria.  Prior experience
benchmarking ELITE, GATO, and DCON has shown that differences in
equilibrium mapping can account for ~5-10% differences in growth rates,
and that when an inverse equilibrium is used, the differences were
reduced to less than 2%.  For this reason, our goal was to use an
inverse solver such as TOQ.  The difficulty with this approach is
that NIMROD requires a Grad-Shafranov solution in the vacuum region.
The solution was to modify TOQ to solve for this region as well, which
is possible for weakly-shaped cross sections.  Another numerical
difficulty is that NIMROD must transition from a low-resistivity plasma
to a high-resistivity plasma over a narrow region.  To minimize the
influence of the resistive transition region on the NIMROD growth rates,
we are choosing an equilibrium that has a wider pedestal width than
those normally studied.  An equilibrium with the above properties
satisfies all of the requirements of the individual codes and the goals
of the benchmark.

We have modified TOQ and solved for a free-boundary equilibrium, but
have yet to obtain a converged solution for one that has all of the
features that have been decided upon.

3. Document zero guide field GEM results from M3D-C1, NIMROD, and
SEL on web page. (S. Jardin, A. Glasser)

This has been done.  The GEM reconnection problem is described in [J. Birn, J. F. Drake, M. A. Shay, et al., J. Geophys. Res. 106 3715 (2001)]. The CEMM is using this well-defined 2D test problem as one of its benchmark problems. We present results on the web site from several codes for both the "resistive MHD" model and for the "two-fluid" model, which has the Hall terms added to the generalized Ohm's law.
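The precise forms used by each code are specified on the web site; schematically, a generalized Ohm's law with the Hall terms retained reads

```latex
\mathbf{E} + \mathbf{v}\times\mathbf{B}
  \;=\; \eta\,\mathbf{J}
  \;+\; \frac{1}{n e}\left(\mathbf{J}\times\mathbf{B} - \nabla p_{e}\right)
```

where the resistive MHD model keeps only the eta J term on the right-hand side, and the J x B and grad p_e terms are the Hall contributions that distinguish the two-fluid model.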

The form of the resistive MHD equations used and the parameters for this problem are given on the web site. We present comparisons for two values of the fluid viscosity. In the first, the dimensionless viscosity is 10 times the dimensionless resistivity. This is the "high viscosity case" and is the one described in the writeup. In the second, the viscosity is reduced by a factor of 100, so that it is 1/10 times the dimensionless resistivity. This is the "low viscosity case". References for the codes and for the resolution settings are given near the bottom of this page.  The results for the 3 codes plus the JFNK-FD code (Samtaney et al.) agree to within a few percent over the entire time history.
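The relation between the two cases is simple arithmetic; as a quick sketch (the resistivity value below is a placeholder, not the value from the web site):

```python
# Placeholder dimensionless resistivity; the actual value is given on the web site.
eta = 1e-3

# "High viscosity case": viscosity is 10 times the resistivity.
visc_high = 10 * eta

# "Low viscosity case": viscosity reduced by a factor of 100,
# leaving it at 1/10 of the resistivity.
visc_low = visc_high / 100

assert abs(visc_low - eta / 10) < 1e-12
```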

For the two-fluid model, a complete specification of the form of the two-fluid equations used is given on the web page, as well as a comparison of the kinetic energy vs. time for the 3 codes.  The agreement is very good, within about 10% over the entire time history, but not as good as the agreement in the resistive MHD case.  This likely reflects differences in the hyper-resistivity used in the different codes and is still being studied.

4. Perform initial scaling of M3D on Jaguar up to 10,000 processors and identify bottlenecks. (J. Chen, E. Held)

M3D is now fully functional and runs efficiently on the whole machine (5120 nodes, 10,240 processors) of the Jaguar Cray XT3 computer at ORNL. Impressive weak and strong scaling results have been obtained using only one of the two processor cores on each node. Overall efficiency of 60-80% was observed when going from 64 to 5120 nodes. This result was obtained using the "Hypre" algebraic multigrid solver within PETSc to solve the compute-intensive linear systems, arising from the elliptic equations, that M3D must solve each timestep. More details of the scaling results can be found at
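The efficiency figures above can be understood through the standard scaling definitions; the sketch below (with hypothetical wall-clock timings, not the measured Jaguar numbers) shows how strong- and weak-scaling efficiency are computed from run times.

```python
def strong_scaling_efficiency(t_base, n_base, t_n, n):
    """Strong scaling: fixed total problem size.

    Ideal speedup going from n_base to n processors is n / n_base;
    efficiency is the measured speedup divided by that ideal.
    """
    speedup = t_base / t_n
    ideal = n / n_base
    return speedup / ideal

def weak_scaling_efficiency(t_base, t_n):
    """Weak scaling: fixed work per processor, so the ideal run time
    is constant; efficiency is base time over measured time."""
    return t_base / t_n

# Hypothetical timings: 100 s on 64 nodes, 1.6 s on 5120 nodes
# for the same problem (strong scaling).
eff_strong = strong_scaling_efficiency(t_base=100.0, n_base=64, t_n=1.6, n=5120)
# (100 / 1.6) / (5120 / 64) = 62.5 / 80 = 0.78125, i.e. ~78% efficiency

# Hypothetical weak-scaling timings: 10 s baseline, 12.5 s at scale.
eff_weak = weak_scaling_efficiency(t_base=10.0, t_n=12.5)  # 0.8, i.e. 80%
```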