Computational astrophysics
Computational astrophysics is the use of numerical methods to solve research problems in astrophysics on a computer.
Numerical methods are used whenever the mathematical model describing an astrophysical system is too complex to solve analytically (with pencil and paper). Today, it is difficult to find examples of research problems that do not use computation.
Solutions generated by numerical methods are generally only approximations to the exact solution of the underlying equations. However, much more complex systems of equations can be solved numerically than can be solved analytically. Thus, approximate solutions to the exact equations found by numerical methods often provide far more insight than exact solutions to approximate equations that can be solved analytically. For example, time-dependent numerical solutions of fluid flow in three dimensions can exhibit behavior which is not expected from one-dimensional analytic solutions to the steady-state (time independent) equations.
The increase in computing power in the last few decades has meant that an increasing share of problems in astrophysics can be solved on a desktop computer. However, the most computationally intensive problems in astrophysics are still limited by the memory and floating-point speed of the largest high-performance computing (HPC) systems available (e.g. computers on the TOP500 list). The use of HPC is necessary to maximize the spatial, temporal, or frequency resolution of solutions, to include more physics, or when large parameter surveys are required to understand the statistics.
Traditionally, the most important applications of computation in astrophysics have been in the areas of:
- Stellar structure and evolution.
- Radiation transfer and stellar atmospheres.
- Astrophysical fluid dynamics.
The dynamics of most of the visible matter in the universe can be treated as a compressible fluid. Time-dependent and multidimensional solutions to the fluid equations, including the effects of gravitational, magnetic, and radiation fields, require numerical methods. A vast range of problems are addressed in this way, from convection and dynamo action in stellar and planetary interiors, to the formation of galaxies and the large scale structure of the universe.
- Planetary, stellar, and galactic dynamics.
The diverse set of mathematical models encountered in astrophysics means that a very wide range of numerical methods are necessary. These range from basic methods for linear algebra, nonlinear root finding, and ordinary differential equations (ODEs), to more complex methods for coupled partial differential equations (PDEs) in multi-dimensions, topics which span the entire content of reference books such as Numerical Recipes.
However, there are several numerical methods used in astrophysics that deserve special mention, either because they have such wide use in astrophysics, or because astrophysicists have made significant contributions to their development. These methods are considered in the following subsections.
The equations of stellar structure are a set of ODEs which define a classic two-point boundary value problem. Analytic solutions to an approximate system of equations exist in special cases (polytropes), but these are of limited application to real stars. Numerical solutions to the full system of equations were first computed using shooting methods, in which boundary conditions are guessed at the center and surface of the star, the equations are integrated outwards and inwards, and matching conditions at some interior point are used to select the full solution. Shooting methods are laborious and cumbersome; modern stellar structure codes instead use relaxation schemes, which solve the finite-difference form of the stellar structure equations by finding the roots of coupled, non-linear equations at each mesh point. A good example of a public code that uses a relaxation scheme is the EZ-code, based on Eggleton's variable mesh method. The evolution of a star is then followed by computing a sequence of stellar models at discrete time intervals, with the chemical composition of the star modified by nuclear reactions in the interior.
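As an illustration of the polytrope special case mentioned above, the Lane-Emden equation for a polytrope of index n can be integrated outward from the center until the dimensionless density variable crosses zero; for n = 1 the analytic solution sin(ξ)/ξ places the surface at ξ = π. The following is a minimal sketch (not a stellar-structure code); the step size and the midpoint integration scheme are arbitrary choices:

```python
import math

def lane_emden(n, h=1e-4):
    """Integrate the Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0
    outward from the center (theta=1, theta'=0) until theta crosses zero.
    Returns xi_1, the dimensionless radius of the polytrope's surface."""
    # start a small step off center using the series expansion theta ~ 1 - xi^2/6,
    # which avoids the coordinate singularity at xi = 0
    xi, theta, dtheta = h, 1.0 - h * h / 6.0, -h / 3.0
    while theta > 0:
        # midpoint (second-order Runge-Kutta) step for the coupled system
        k1t = dtheta
        k1d = -theta**n - 2.0 * dtheta / xi
        tm = theta + 0.5 * h * k1t
        dm = dtheta + 0.5 * h * k1d
        xm = xi + 0.5 * h
        k2t = dm
        k2d = -max(tm, 0.0)**n - 2.0 * dm / xm
        theta += h * k2t
        dtheta += h * k2d
        xi += h
    return xi

# for n = 1 the surface should lie at xi = pi
print(lane_emden(1))
```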
Calculating the emergent intensity from an astrophysical system requires solving a multidimensional integro-differential equation, along with level population equations to account for the interaction of the radiation with matter. In general the solution is a function of two angles, frequency, and time. Even in static, plane-parallel atmospheres, the problem is two-dimensional (one angle and frequency). However, the most challenging aspect of the problem is that scattering couples the solutions at different angles and frequencies. As in the stellar structure problem, relaxation schemes are used to solve the finite difference form of the transfer equations, although specialized iteration techniques are necessary to accelerate convergence. Monte-Carlo methods, which adopt statistical techniques to approximate the solution by following the propagation of many photon packets, are becoming increasingly important. The problem of line-transfer in a moving atmosphere (stellar wind) is especially challenging, due to non-local coupling introduced by Doppler shifts in the spectrum.
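The Monte-Carlo approach mentioned above can be illustrated in its simplest possible form: photon packets crossing a purely absorbing, uniform slab, where the escaping fraction is known analytically to be e^(-τ). This is a toy sketch (the packet count and seed are arbitrary choices), and it deliberately omits the scattering that makes real transfer problems hard:

```python
import math
import random

def escape_fraction(tau, n_packets=200_000, seed=1):
    """Monte-Carlo estimate of the fraction of photon packets that cross a
    purely absorbing slab of total optical depth tau.  Each packet travels an
    optical-depth path -ln(U) drawn from the exponential distribution and
    escapes if that path exceeds the slab depth.  The analytic answer is
    exp(-tau), so the estimate converges as the packet count grows."""
    rng = random.Random(seed)
    escaped = sum(1 for _ in range(n_packets)
                  if -math.log(rng.random()) > tau)
    return escaped / n_packets

# should be close to exp(-1) ~ 0.368
print(escape_fraction(1.0))
```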
There are two tasks in an N-body code: integrating the equations of motion (pushing particles), and computing the gravitational acceleration of each particle. The former requires methods for integrating ODEs. Modern codes are based on a combination of high-order difference approximations (e.g. Hermite integrators), and symplectic methods (which have the important property of generating solutions that obey Liouville's Theorem, i.e. that preserve the volume of the solution in phase space). Symplectic methods are especially important for long term time integration, because they control the accumulation of truncation error in the solution.
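A minimal sketch of a symplectic integrator is the second-order kick-drift-kick leapfrog, shown here following a circular Kepler orbit in units where GM = 1 (the step size and duration are arbitrary choices). Its characteristic property is that the energy error oscillates but does not accumulate secularly:

```python
import math

def leapfrog_orbit(steps=10_000, dt=0.01):
    """Integrate a circular Kepler orbit (GM = 1) with the kick-drift-kick
    leapfrog, a second-order symplectic method.  Returns the relative energy
    drift, which stays bounded rather than growing with time."""
    x, y = 1.0, 0.0        # start on a circular orbit of radius 1
    vx, vy = 0.0, 1.0      # circular speed for r = 1, GM = 1

    def acc(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -x / r3, -y / r3

    e0 = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
    ax, ay = acc(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx;        y += dt * vy          # drift
        ax, ay = acc(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    e1 = 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
    return abs((e1 - e0) / e0)

# relative energy error remains tiny even after many orbits
print(leapfrog_orbit())
```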
Calculation of the gravitational acceleration is challenging because the computational cost scales as N(N-1), where N is the number of particles. For small N, direct summation can be used. For moderate N (currently <math>N \sim 10^{5-6}</math>), special purpose hardware (e.g. GRAPE boards) can be used to accelerate the evaluation of <math>1/r^2</math> necessary to compute the acceleration by direct summation. Finally, for large N (currently <math>N \geq 10^{9-10}</math>), tree-methods are used to approximate the force from distant particles.
Solving the equations of compressible gas dynamics is a classic problem in numerical analysis which has application to many fields besides astrophysics. Thus, a large number of methods have been developed, with many important contributions being made by astrophysicists. A complete review of all the methods is beyond the scope of this discussion (see Laney 1998). For solving the equations of compressible fluid dynamics, the most popular methods include:
- finite-difference techniques (which require hyper-viscosity to smooth discontinuities),
- finite-volume methods (which often use a Riemann solver to compute upwind fluxes),
- operator split methods (which combine elements of both finite-differencing and finite-volume methods for different terms in the equations),
- central methods (which often use simple expressions for the fluxes, combined with high-order spatial interpolation), and
- particle methods such as smoothed particle hydrodynamics (SPH, which integrates the motion of discrete particles to follow the flow, see Monaghan 1992).
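The finite-volume idea can be illustrated with the simplest possible case: first-order upwind differencing for 1-D linear advection on a periodic grid. The upwind interface flux here plays the role that a Riemann solver plays in the full gas-dynamics problem; the grid size and Courant number below are arbitrary choices:

```python
def advect_upwind(u, c, dx, dt, steps):
    """First-order finite-volume (upwind) update for 1-D linear advection
    u_t + c u_x = 0 with c > 0 on a periodic grid.  The flux through each
    cell interface is taken from the upwind cell; the scheme is stable and
    monotone for Courant numbers between 0 and 1, at the price of strong
    numerical diffusion."""
    nu = c * dt / dx                 # Courant number
    assert 0 < nu <= 1.0, "CFL condition violated"
    u = list(u)
    n = len(u)
    for _ in range(steps):
        # u[i-1] wraps around at i = 0, giving periodic boundaries
        u = [u[i] - nu * (u[i] - u[i - 1]) for i in range(n)]
    return u

# advect a square pulse once around a periodic domain at Courant number 0.5
n = 100
u0 = [1.0 if 40 <= i < 60 else 0.0 for i in range(n)]
u1 = advect_upwind(u0, c=1.0, dx=1.0 / n, dt=0.5 / n, steps=2 * n)
print(max(u1))   # the monotone scheme keeps the solution within [0, 1]
```

Note that the update conserves the total of u exactly (each interface flux is added to one cell and subtracted from its neighbor), a defining property of finite-volume methods.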
Computational astrophysics is necessarily inter-disciplinary, involving aspects of not only astrophysics, but also numerical analysis and computer science.
Numerical Analysis is a rigorous branch of mathematics concerned with the approximation of functions and integrals, and the approximation of solutions to algebraic, differential, and integral equations. It provides tools to analyze errors that arise from the approximations themselves (truncation error), and from the use of finite-precision arithmetic on a computer (round-off error). Convergence, consistency, and stability of numerical algorithms are all essential for their use in practical applications. Thus, the development of new numerical algorithms to solve problems in astrophysics is deeply rooted in the tools of numerical analysis.
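The competition between truncation error and round-off error described above can be demonstrated with a one-sided finite-difference derivative of sin(x): shrinking the step h first reduces the truncation error (which scales as h), until round-off (which scales as machine epsilon over h) takes over and the error grows again:

```python
import math

def fd_error(h, x=1.0):
    """Error of the one-sided finite difference (f(x+h) - f(x)) / h as an
    estimate of f'(x) for f = sin.  Truncation error scales as h, while
    round-off error scales as machine epsilon / h, so the total error is
    minimized at an intermediate step size."""
    approx = (math.sin(x + h) - math.sin(x)) / h
    return abs(approx - math.cos(x))

# moderate h: truncation error dominates, roughly h/2 * |sin(x)|
print(fd_error(1e-2))
# tiny h: the subtraction loses nearly all significant digits to round-off
print(fd_error(1e-15))
```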
Computational science and computer science differ. The former is the use of numerical methods to solve scientific problems. The latter is the study of computers and computation. Thus, computer science tends to be more oriented towards the theory of computers and computation, while scientific computation is concerned with the practical aspects of solving science problems. Still, there is a large degree of overlap between the fields, with many computer scientists engaged in developing tools for scientific computation, and many scientists working on software problems of interest to computer scientists. Examples of the overlap include the development of standards for parallel processing such as the Message Passing Interface (MPI) and OpenMP, and the development of parallel I/O filesystems such as Lustre.
Computational astrophysics has a long history, dating back to the numerical approximation of the motion of the Moon and planets in order to compute ephemerides for tides and navigation. At first, the term computer referred to a person employed to calculate some quantity to some level of numerical accuracy. Electronic digital computers were introduced in World War II to break encryption codes and compute artillery range tables; however, they were quickly adapted to other purposes after the war. In astrophysics, this included computing the first stellar structure models in the 1950s, and the development of numerical methods for fluid flow and shock-capturing at about the same time. Major advances in the decades that followed included the use of transistors instead of vacuum tubes, the development of compiled languages (Fortran in the 1950s, C in the 1970s), the adoption of the IEEE standards for the representation of floating-point numbers and arithmetic (previously every manufacturer used their own bit pattern), and the development of very large scale integration (VLSI), which allows the complexity of processor design that is enjoyed today. Modern trends in computer design are towards distributed-memory parallel processors, with multiple cores capable of vector processing. At the same time as these developments in hardware, progress in numerical analysis, computer science, and software engineering has resulted in standardized protocols for interprocess communication across networks, object-oriented languages such as Fortran 95 and C++, interpreted languages such as Java and Python, visualization tools such as OpenGL, and software engineering tools such as CVS and Subversion (SVN).
- C. B. Laney (1998), Computational Gasdynamics, Cambridge University Press (ISBN: 0521570697)
- R. LeVeque (2002), Finite Volume Methods for Hyperbolic Problems, Cambridge University Press
- J. J. Monaghan (1992), Smoothed Particle Hydrodynamics, Ann. Rev. Astron. and Astrophys., Vol. 30, p. 543-574
Category:Astrophysics Category:Computational Astrophysics Category:Dynamical Systems