Practical application of the Runge-Kutta method

When writing games or simulations, you’re more than likely going to get to a point where you need to compute the motion of an object over a discrete step in time. If you have a constant acceleration, such as gravity, and no other forces acting on your object, there’s no problem and you can just use the simple laws of motion you derived in high school physics:

position += velocity * dt + 0.5f * acc * dt * dt;

But in reality, this is rarely the case, and what you really have are forces that change over time. It’s also likely that those forces depend on the position or velocity of the object, such as spring forces, drag or friction forces, gravity on the scale of orbiting bodies, etc. In fact, I’m writing this article as a result of having to brush up on the subject for a simulation involving electrostatic and magnetic forces on charged particles in nonuniform fields.

Since the equations of motion are ordinary differential equations (ODEs), we can apply a numerical integration method to approximate the motion of an object. Cutting to the chase, this typically comes down to using a Runge-Kutta method of order 4, which is so commonly used in games that it’s simply referred to as RK4. If you’re like me, just knowing you’d like to use RK4 is half the battle. You can spend a whole bunch of time reading about it on Wikipedia and get a pretty good idea of how it works, but bridging the gap between theory and application can be the hardest part. My aim in this article is to show how to make that connection, and to give a couple of practical examples that can be easily applied to your application.

A friend assembled the code from this article and tested it on a simple harmonic spring to see how big of a difference it really makes. I took the output from the program, which compares the exact harmonic motion formula, RK4, 2nd order motion, and an Euler step, and graphed the results so you can see how much it really matters.

[Figure: Approximating the motion with several methods, sampled 10x a second]
[Figure: Approximating the motion with several methods, sampled 60x a second]

The first graph was only sampled 10x a second; the second shows data sampled 60 times a second, with only every 10th point plotted. Feel free to download the source and play with the code yourself. You might not notice the black line that represents the actual motion, because the RK4 step is so accurate that it draws right over the top of it, even with the 1/10 s step.

If you’re still not sure why you should be using a numeric integration method in the first place, I recommend reading Integration Basics from Glenn Fiedler. It’s a great intro and will get you started with something you can use right away.

The rest of this article is an extension of Glenn’s article, but it should be sufficient on its own to get a handle on the Runge-Kutta methods. I’ll show a generic version of the Runge-Kutta method with 4th order values, how to reduce it to the standard RK4 implementation (which you can also find in Glenn’s article), and then show how that generic version can be extended to other Runge-Kutta methods such as the RK4/5 Fehlberg method.

Applying the math to form a generic Runge-Kutta implementation

So for the first step, we can see on Wikipedia that the Runge-Kutta method applies to the class of problems that fit the form:

 \dot{y} = f(t, y)
-or-
y'(t) = f(t, y)

This is basically saying you should have a function that you can pass a value to and get back the rate of change for that value. In terms of mechanics, that’s like saying you should have a function that returns the velocity given a position and a time, v = f(t, p). That’s not really the common case for us, though: typically we’re interested in forces (which relate to acceleration through F = ma), and we’ll have a function that determines the force based on position (often we don’t even need the time variable). This means we need to translate our second order equation into a first order one to meet this criterion. To do that, we can just clump position and velocity together and consider that our y value. This leads us to define the following objects:

struct State
{
    Vec pos; // position
    Vec vel; // velocity
};

struct Derivative
{
    Vec dPos; // derivative of position: velocity
    Vec dVel; // derivative of velocity: acceleration
};

Also, I’m going to add this helper function now so we can simplify the code later:

State EulerStep(const State &initState, const Derivative &initDerivative, float dt)
{
    State outState;
    outState.pos = initState.pos + initDerivative.dPos * dt;
    outState.vel = initState.vel + initDerivative.dVel * dt;
    return outState;
}

I’ve made some minor changes here relative to Glenn’s state objects by replacing the floats with vectors. I’ll also be doing some (scalar * derivative) and (derivative + derivative) operations with these objects in order to do a weighted average, but you can assume those are just component-wise operations.
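
If you’d rather see those spelled out, here’s a minimal sketch of the overloads I’m assuming (this isn’t from Glenn’s code or the downloadable source, and it assumes the Vec type already supports + between vectors and * with a float):

// Component-wise helpers used for the weighted averages below.
Derivative operator*(float s, const Derivative &d)
{
    Derivative out;
    out.dPos = d.dPos * s;
    out.dVel = d.dVel * s;
    return out;
}

Derivative operator+(const Derivative &a, const Derivative &b)
{
    Derivative out;
    out.dPos = a.dPos + b.dPos;
    out.dVel = a.dVel + b.dVel;
    return out;
}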

Now we can satisfy  y'(t) = f(t, y) for our objects with a function like this:

Derivative SampleDerivative(float dt, const State &sampleState)
{
    Derivative output;
    output.dPos = sampleState.vel;
    output.dVel = ComputeForce(sampleState, dt) / mass;
    return output;
}
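
ComputeForce is whatever your simulation calls for. As a concrete example (my own sketch, not the exact function from the downloadable source), the damped harmonic spring from the graphs above could look something like this, with springK and damping assumed to be defined elsewhere:

// Example only: a damped spring pulling the object back toward the origin,
// roughly the kind of force used for the harmonic spring comparison above.
Vec ComputeForce(const State &state, float t)
{
    (void)t; // this particular force doesn't depend on time
    return state.pos * -springK + state.vel * -damping;
}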

Now on to the algorithm

Basically, the Runge-Kutta method works iteratively, with each additional step accounting for the error contributed by changes in successively higher order derivatives.

  • An order 1 Runge-Kutta method turns out to be the Euler method, and assumes constant velocity for the step.
  • An order 2 Runge-Kutta method is accurate for constant acceleration.
  • An order 3 Runge-Kutta method is accurate for constant jerk.
  • and so on.

The following formula is used to compute the intermediate steps (the k values), and each of them depends on the preceding steps weighted together. It’s possible to derive the weights for any number of steps, but several popular sets have already been computed, so I’ll just stick with those for this article.

Here’s what the iterative steps look like as defined on Wikipedia:

k_1 = hf(t_n + {\color{red}0}h, y_n)
k_2 = hf(t_n + {\color{red}c_2}h, y_n + {\color{blue}a_{21}}k_1)
k_3 = hf(t_n + {\color{red}c_3}h, y_n + {\color{blue}a_{31}}k_1 + {\color{blue}a_{32}}k_2)
k_4 = hf(t_n + {\color{red}c_4}h, y_n + {\color{blue}a_{41}}k_1 + {\color{blue}a_{42}}k_2 + {\color{blue}a_{43}}k_3)
\vdots
k_s = hf(t_n + {\color{red}c_s}h, y_n + {\color{blue}a_{s1}}k_1 + {\color{blue}a_{s2}}k_2 + \cdots + {\color{blue}a_{s,s-1}}k_{s-1})

And the final step combines these intermediate steps like this:

 y_{n+1} = y_n + {\color{magenta}b_1}k_1 + {\color{magenta}b_2}k_2 + {\color{magenta}b_3}k_3 + {\color{magenta}b_4}k_4 + \cdots + {\color{magenta}b_s}k_s

I’ve colorized all of the coefficients I was referring to, which are precomputed for us. They’re usually arranged like this, in what’s called a Butcher Tableau:

0
{\color{red}c_2} {\color{blue}a_{21}}
{\color{red}c_3} {\color{blue}a_{31}} {\color{blue}a_{32}}
\vdots \vdots \ddots
{\color{red}c_s} {\color{blue}a_{s1}} {\color{blue}a_{s2}} \cdots {\color{blue}a_{s,s-1}}
{\color{magenta}b_1} {\color{magenta}b_2} \cdots {\color{magenta}b_{s-1}} {\color{magenta}b_s}

h is our dt
{\color{red}c} is the row weight, far left column in the Butcher Tableau
{\color{blue}a} are the rest of the coefficients in the same row of the Butcher Tableau, corresponding to each of the previous k terms computed. (Note these sum to the row weight)
{\color{magenta}b} are the final weights used to sum the k values and correspond to the last row in the Butcher Tableau
t_n is the time at the beginning of the step
y_n is the initial state
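
As a quick sanity check, the one-stage tableau (a single row with {\color{magenta}b_1} = 1) collapses the whole formula down to

y_{n+1} = y_n + hf(t_n, y_n)

which is exactly the Euler method from the first bullet above.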

Since we assume our data at the current time (before the step) is accurate, we can just set t_n = 0 and use dt for h. This assumes you don’t need the absolute time value in your force function, and that a time offset into the step is acceptable if it needs time at all.

So the result of the function call gives us our derivatives, which are the rate of change values. But you can see that all of the k values are multiplied by h, which in effect turns those rate of change derivatives into Euler stepped increments. Then, when they’re used in subsequent calculations, they are just weighted by the a coefficients before being added to the initial state y_n.

I find it easier to understand if we refactor the h term like this:

k_s = f(t_n + {\color{red}c_s}h, y_n + {\bf h}({\color{blue}a_{s1}}k_1 + {\color{blue}a_{s2}}k_2 + \cdots + {\color{blue}a_{s,s-1}}k_{s-1}))
y_{n+1} = y_n + {\bf h}({\color{magenta}b_1}k_1 + {\color{magenta}b_2}k_2 + {\color{magenta}b_3}k_3 + {\color{magenta}b_4}k_4 + \cdots + {\color{magenta}b_s}k_s)

This way, the k values are just the derivatives from the previous steps, and it’s clearer how the process works.
1) Get the derivative from the starting state
2) Use the (weighted) derivative to do an Euler step and get a new intermediate state
3) Get the derivative from that new state approximation
4) Weight all of the previous derivatives together and go back to step 2; once every k has been computed, the final b-weighted sum is Euler stepped from the starting state to produce the result

So finally here’s what the code for an RK4 function looks like in generic form, given example constants from an RK4 Butcher Tableau.

State RKIntegrate(const State &state, float dt)
{
    //const float C1=0.0f;
    const float C2 = 0.5f, A21 = 0.5f;
    const float C3 = 0.5f, A31 = 0.0f, A32 = 0.5f;
    const float C4 = 1.0f, A41 = 0.0f, A42 = 0.0f, A43 = 1.0f;
    // ------------------------------------------------------------------------------------
    const float B1 = 1.0f/6.0f, B2 = 1.0f/3.0f, B3 = 1.0f/3.0f, B4 = 1.0f/6.0f;

    Derivative k1 = SampleDerivative(0.0f, state);
    Derivative k2 = SampleDerivative(C2*dt, EulerStep(state, A21*k1, dt));
    Derivative k3 = SampleDerivative(C3*dt, EulerStep(state, A31*k1 + A32*k2, dt));
    Derivative k4 = SampleDerivative(C4*dt, EulerStep(state, A41*k1 + A42*k2 + A43*k3, dt));
    // ...
    //Derivative kn = SampleDerivative(Cn*dt, EulerStep(state, An1*k1 + An2*k2 + ... + An_(n-1)*k_(n-1), dt));

    const Derivative derivativeSum = B1*k1 + B2*k2 + B3*k3 + B4*k4;

    // For Runge-Kutta on Wikipedia, this final step is done differently because
    // each Ki is stored as dt*derivative, instead of just the derivative as I have done.
    // What that means is that the final weighted average can just be added to the position.
    // The only difference is that I've saved the dt for last and now I must account for dt,
    // so I can use a normal Eulerstep and do y = y0 + derivativeSum*dt;
    return EulerStep(state, derivativeSum, dt);
}
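
Using it is then just a matter of calling it once per step from your update loop. Here’s a minimal usage sketch (the UpdateSimulation function and the fixed 60 Hz step are my own example, not from the article’s source):

void UpdateSimulation(State &state, float frameTime)
{
    const float dt = 1.0f / 60.0f; // fixed integration step
    // step through as many whole dt's as fit in this frame (leftover time is ignored in this sketch)
    for (float elapsed = 0.0f; elapsed + dt <= frameTime; elapsed += dt)
        state = RKIntegrate(state, dt);
}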

The simplified RK4 implementation

In the generic example, I used the constants from the following Butcher Tableau:

{\color{red}0}
{\color{red}1/2} {\color{blue}1/2}
{\color{red}1/2} {\color{blue}0} {\color{blue}1/2}
{\color{red}1} {\color{blue}0} {\color{blue}0} {\color{blue}1}
{\color{magenta}1/6} {\color{magenta}1/3} {\color{magenta}1/3} {\color{magenta}1/6}

This is the most popular Butcher Tableau for RK4, and it has a special property: in each row, the only nonzero a coefficient is the one just below the diagonal, and it equals the row weight: {\color{red}c_2} = {\color{blue}a_{21}}, {\color{red}c_3} = {\color{blue}a_{32}}, {\color{red}c_4} = {\color{blue}a_{43}}, \cdots

This means that each intermediate k step only depends on the immediately preceding step. As a result, you can skip multiplying by the a coefficient altogether and just pass the previous k term (the derivative) into your sampling function. Then, inside the function, since we’re not tracking absolute time, the delta time passed in is c_i * dt, which is the same as a_{i,i-1} * dt, so we can reuse it to do the EulerStep as part of the sampling function. Additionally, since so many of the constants are 0, 1, or other nice round values, we’ll just hard-code them into the formula.

Derivative SampleDerivativeRK4(float dt, const State &initialState, const Derivative &prevDerivative)
{
    // Can do this because dt here is scaled by the correct weight
    State projectedState = EulerStep(initialState, prevDerivative, dt);

    Derivative output;
    output.dPos = projectedState.vel;
    output.dVel = ComputeForce(projectedState, dt) / mass;
    return output;
}

// RK4 simplified
State RK4Integrate(const State &state, float dt)
{
    Derivative whatever = {}; // the value doesn't matter here, since the state is only advanced by a dt of 0; zero-initialize it to keep things well-defined

    Derivative k1 = SampleDerivativeRK4(0.0f, state, whatever);	
    Derivative k2 = SampleDerivativeRK4(0.5f*dt, state, k1);
    Derivative k3 = SampleDerivativeRK4(0.5f*dt, state, k2);
    Derivative k4 = SampleDerivativeRK4(dt, state, k3);

    return EulerStep(state, 1.0f/6.0f * (k1 + 2*(k2 + k3) + k4), dt);
}

So there you have it. From the generic implementation you can see how it’s possible to get to the simplified RK4 implementation just like in Glenn’s article, and other derivations should now be easy to get as well.

Going beyond the 4th order to RK4/5 with error approximation

We can plug any Butcher Tableau constants into the generic implementation and get other Runge-Kutta methods, such as the RK4/5 Fehlberg version. The Fehlberg method gives us both a 4th order and a 5th order accurate result from a single set of k values. That’s handy because we can then compare the two answers and get an estimate of the error.

One application of this is adaptive time stepping: retry with a smaller dt if the error is greater than a predefined threshold, and likewise increase dt when the error is well below the threshold to save on unnecessary computation (there’s a sketch of this after the implementation below). This lets you pretty much guarantee a certain level of accuracy without spending computation where you don’t need it. That sure beats using the minimum step size every time to guarantee that accuracy, and it’s also much cheaper than estimating the error by comparing one full RK step against two half-sized RK steps.

Here’s the Butcher Tableau and implementation for the RK-Fehlberg method:

{\color{red}0}
{\color{red}1/4} {\color{blue}1/4}
{\color{red}3/8} {\color{blue}3/32} {\color{blue}9/32}
{\color{red}12/13} {\color{blue}1932/2197} {\color{blue}-7200/2197} {\color{blue}7296/2197}
{\color{red}1} {\color{blue}439/216} {\color{blue}-8} {\color{blue}3680/513} {\color{blue}-845/4104}
{\color{red}1/2} {\color{blue}-8/27} {\color{blue}2} {\color{blue}-3544/2565} {\color{blue}1859/4104} {\color{blue}-11/40}
{\color{magenta}16/135} {\color{magenta}0} {\color{magenta}6656/12825} {\color{magenta}28561/56430} {\color{magenta}-9/50} {\color{magenta}2/55}
{\color{magenta}25/216} {\color{magenta}0} {\color{magenta}1408/2565} {\color{magenta}2197/4104} {\color{magenta}-1/5} {\color{magenta}0}

State RK45Integrate(const State &state, float dt, Vec &errorOut)
{
    //const float C1 = 0.0f;
    const float C2 = 0.25f,       A21 = 0.25f;
    const float C3 = 3.0f/8.0f,   A31 = 3.0f/32.0f,      A32 = 9.0f/32.0f;
    const float C4 = 12.0f/13.0f, A41 = 1932.0f/2197.0f, A42 = -7200.0f/2197.0f, A43 = 7296.0f/2197.0f;
    const float C5 = 1.0f,        A51 = 439.0f/216.0f,   A52 = -8.0f,            A53 = 3680.0f/513.0f,   A54 = -845.0f/4104.0f;
    const float C6 = 0.5f,        A61 = -8.0f/27.0f,     A62 = 2.0f,             A63 = -3544.0f/2565.0f, A64 = 1859.0f/4104.0f, A65 = -11.0f/40.0f;
    // ------------------------------------------------------------------------------------------------------------------------------------------
    const float B4_1 = 25.0f/216.0f, B4_2 = 0.0f, B4_3 = 1408.0f/2565.0f,  B4_4 = 2197.0f/4104.0f,   B4_5 = -1.0f/5.0f,  B4_6 = 0.0f;
    const float B5_1 = 16.0f/135.0f, B5_2 = 0.0f, B5_3 = 6656.0f/12825.0f, B5_4 = 28561.0f/56430.0f, B5_5 = -9.0f/50.0f, B5_6 = 2.0f/55.0f;

    Derivative k1 = SampleDerivative(0.0f,  state);
    Derivative k2 = SampleDerivative(C2*dt, EulerStep(state, A21*k1, dt));
    Derivative k3 = SampleDerivative(C3*dt, EulerStep(state, A31*k1 + A32*k2, dt));
    Derivative k4 = SampleDerivative(C4*dt, EulerStep(state, A41*k1 + A42*k2 + A43*k3, dt));
    Derivative k5 = SampleDerivative(C5*dt, EulerStep(state, A51*k1 + A52*k2 + A53*k3 + A54*k4, dt));
    Derivative k6 = SampleDerivative(C6*dt, EulerStep(state, A61*k1 + A62*k2 + A63*k3 + A64*k4 + A65*k5, dt));

    const Derivative deltaSum4 = B4_1*k1 + B4_2*k2 + B4_3*k3 + B4_4*k4 + B4_5*k5 + B4_6*k6;
    const Derivative deltaSum5 = B5_1*k1 + B5_2*k2 + B5_3*k3 + B5_4*k4 + B5_5*k5 + B5_6*k6;

    // As with RKIntegrate above, each ki here is just the derivative (not dt*derivative as on Wikipedia),
    // so the dt is applied last via a normal EulerStep: y = y0 + derivativeSum*dt.
    State updatedOrder5 = EulerStep(state, deltaSum5, dt);
    State updatedOrder4 = EulerStep(state, deltaSum4, dt);

    errorOut = updatedOrder5.pos - updatedOrder4.pos;
    return updatedOrder4;
}
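
And here’s a rough sketch of the adaptive time stepping idea from above. The errorTolerance parameter, the Length helper (assumed to return the magnitude of a Vec), the retry loop, and the 0.5/2.0 scale factors are all my own example choices, not from the article’s source; a real implementation would typically scale dt based on the ratio of the error to the tolerance.

// Example only: advance the state by frameTime using adaptive sub-steps.
void AdaptiveUpdate(State &state, float frameTime, float errorTolerance)
{
    float remaining = frameTime;
    float dt = frameTime; // start optimistic and shrink as needed

    while (remaining > 0.0f)
    {
        if (dt > remaining)
            dt = remaining;

        Vec error;
        State candidate = RK45Integrate(state, dt, error);

        if (Length(error) > errorTolerance && dt > 1e-6f)
        {
            dt *= 0.5f; // too much error: retry this sub-step with a smaller dt
            continue;
        }

        state = candidate; // accept the step
        remaining -= dt;

        if (Length(error) < 0.25f * errorTolerance)
            dt *= 2.0f; // error comfortably small: try a bigger step next time
    }
}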

Download the source code used in this article
