Runge-Kutta methods
For solving ordinary differential equations, the classical Runge-Kutta method of 4th order is well established. But there is not just one Runge-Kutta method; there is an entire class of them, and depending on the problem to solve, different methods of this family are favorable. Here is a short overview...
Update: corrected link.
Update: Changed formatting.
Many numeric problems can be reduced down to the problem of solving an ordinary differential equation of the form
dy/dx = f(x, y)
The simplest way to numerically solve this equation is the Euler method, which is quite intuitive:
Y(n+1) = Y(n) + Dx * dy/dx(x(n)) = Y(n) + Dx * f(x(n), Y(n))
where Dx is the step size, which determines the accuracy of the solution. The resulting pairs {x(0), Y(0)}, ..., {x(n), Y(n)} are the discrete numeric approximation of the actual solution {x(0), y(x(0))}, ..., {x(n), y(x(n))}, and Dx determines the distance between two points of this discrete approximation. However, this is only a very crude way of approximating, and many problems unfold when using it: it is not at all accurate and easily becomes unstable, and thus tends to produce wrong solutions, or at least large deviations from the actual solution. To counter this, one can introduce refinements by including 'intermediate' values like
Y(n+1) = Y(n) + Dx * dy/dx(x(n) + 1/2 Dx, Y(n) + 1/2 Dx * dy/dx(x(n), Y(n))) = Y(n) + Dx * f(x(n) + 1/2 Dx, Y(n) + 1/2 Dx * f(x(n), Y(n)))
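As a sketch, the Euler step and the midpoint refinement above might look like this in Python (the function names and the sample ODE in the comments are my own choices for illustration):

```python
def euler_step(f, x, y, dx):
    # Y(n+1) = Y(n) + Dx * f(x(n), Y(n))
    return y + dx * f(x, y)

def midpoint_step(f, x, y, dx):
    # Y(n+1) = Y(n) + Dx * f(x(n) + Dx/2, Y(n) + Dx/2 * f(x(n), Y(n)))
    return y + dx * f(x + dx / 2, y + dx / 2 * f(x, y))

def solve(step, f, x0, y0, dx, n):
    # Produce the list of pairs {x(i), Y(i)} described in the text.
    xs, ys = [x0], [y0]
    for _ in range(n):
        ys.append(step(f, xs[-1], ys[-1], dx))
        xs.append(xs[-1] + dx)
    return list(zip(xs, ys))
```

For instance, with f(x, y) = y (whose exact solution is exp(x)), ten steps of size 0.1 with the midpoint rule stay noticeably closer to exp(1) than the same ten Euler steps.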
In general, there are many possibilities to construct methods like these, called Runge-Kutta methods. They all take the form
Y(n+1) = Y(n) + b(1) * k(1) + b(2) * k(2) + ...
where
(*) k(i) = Dx * f(x(n) + c(i) * Dx, Y(n) + a(i,1) * k(1) + a(i,2) * k(2) + ...)
with
a(i,j), b(i), c(i)
being coefficients with appropriately chosen constant values. If each right-hand side in the equations (*) depends only on the previously computed k(1), ..., k(i-1), the method is explicit; if it also depends on k(i) itself (or later k's), the method is implicit. Explicit methods are far easier to apply, whereas implicit methods are harder to implement (you need to find the roots of the equations numerically). Implicit methods are, however, in general much more stable and thus yield far better results than explicit ones. Here is a short document summarizing the essence of how to apply Runge-Kutta methods, along with the most common sets of coefficients. The (short) text is in German, but the coefficient sets are given as Butcher tableaus, which should be readable whether you understand German or not.
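To make the general form concrete, here is a sketch of a generic explicit Runge-Kutta stepper driven by a set of coefficients a(i,j), b(i), c(i) as in a Butcher tableau; the classical 4th-order coefficients are used as an example, and the function and variable names are my own:

```python
def rk_step(f, x, y, dx, a, b, c):
    # One explicit Runge-Kutta step:
    #   k(i) = Dx * f(x(n) + c(i)*Dx, Y(n) + a(i,1)*k(1) + ... + a(i,i-1)*k(i-1))
    #   Y(n+1) = Y(n) + b(1)*k(1) + b(2)*k(2) + ...
    # Explicit means each k(i) only uses the k's already computed,
    # so no equation solving is needed.
    k = []
    for i in range(len(b)):
        yi = y + sum(a[i][j] * k[j] for j in range(i))
        k.append(dx * f(x + c[i] * dx, yi))
    return y + sum(bi * ki for bi, ki in zip(b, k))

# Butcher tableau of the classical 4th-order Runge-Kutta method.
A = [[], [1/2], [0, 1/2], [0, 0, 1]]
B = [1/6, 1/3, 1/3, 1/6]
C = [0, 1/2, 1/2, 1]
```

With f(x, y) = y, a single step of size 0.1 from y = 1 already agrees with exp(0.1) to about seven decimal places, illustrating why the 4th-order method is so popular.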