## odeRK(func, t0, t1, dt, x0, butcher, adapt_set, unit_hint)

Calculates the solution of an ODE $\dot x(t) = f(t,x)$ for a given Butcher parametrisation.

• func - The function $f$ defining the ODE $\dot x(t) = f(t,x)$.
• t0 - The start of the evaluation interval.
• t1 - The end of the evaluation interval.
• dt - Initial iteration step (may be adjusted later, cf. below).
• x0 - Initial state.
• butcher - The Butcher tables parametrising the Runge-Kutta method.
• adapt_set - Settings of the adaptive steps.
• unit_hint - The method tries to guess all the right quantities from the parameters, but zero might trick it. You can give the right units here as [t_units, [x0_unit, x1_unit, ...]].
• return value - Either the value of the solution in the format [t0, x00, x10, ...; t1, x01, x11, ...], or the coefficients for the interpolation when the interpolation is calculated.

Our example throughout this documentation will be the autonomous equation $\ddot x = -x$.

Fixed-step iteration.

In this case, the iteration goes through with the predefined t-step $dt$ between $t_0$ and $t_1$. The structure 'butcher' requires only the components 'matrix', 'steps', and 'coeff'. These are, respectively, the coefficients acting on the previous function evaluations to yield the new $x$ for the next evaluation, the time steps of the new evaluations, and the final coefficients of the eventual step. See the return values of 'odeRungeKutta3()' and 'odeRungeKutta4()' as examples.

```
soln = odeRK(@(t,x) [x,-x/1s^2], 0, 5s, .01s, [1e-21m, 1m/s], odeRungeKutta4());
soln.size()
plot(soln[...;0]/1s, soln[...;1]/1m)
```
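To make the roles of 'matrix', 'steps', and 'coeff' concrete, here is a minimal Python sketch of one explicit Runge-Kutta step driven by such a table. The 'rk4' dictionary below mirrors the classical RK4 tableau that 'odeRungeKutta4()' presumably returns; the exact representation in the library may differ.

```python
# A hypothetical mirror of the structure returned by odeRungeKutta4():
# 'matrix' holds the coefficients acting on the previous evaluations,
# 'steps' the time fractions c_i, and 'coeff' the final weights b_i.
rk4 = {
    "matrix": [[], [0.5], [0.0, 0.5], [0.0, 0.0, 1.0]],
    "steps":  [0.0, 0.5, 0.5, 1.0],
    "coeff":  [1/6, 1/3, 1/3, 1/6],
}

def rk_step(f, t, x, dt, butcher):
    """One explicit Runge-Kutta step for a scalar ODE x' = f(t, x)."""
    k = []
    for a_row, c in zip(butcher["matrix"], butcher["steps"]):
        # New evaluation point built from the previous stage values.
        xi = x + dt * sum(a * ki for a, ki in zip(a_row, k))
        k.append(f(t + c * dt, xi))
    # Final combination with the coefficients b_i.
    return x + dt * sum(b * ki for b, ki in zip(butcher["coeff"], k))
```

For $\dot x = x$, one step of size $0.1$ from $x(0)=1$ reproduces $e^{0.1}$ up to the expected $O(dt^5)$ local error.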

One often needs error control on the t-steps. There are embedded RK methods that estimate the error from a different-order method derived from the same function-evaluation matrix. The parameter 'butcher' will then require the corresponding final coefficients in the component 'corr', and also a component 'corr_fn', which returns the error estimate from the last two difference vectors of the two different-order iterations (hence allowing simple control-system methods). See the return values of 'odeDormandPrince54()' and 'odeDormandPrince853()' as examples.
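The idea behind such difference vectors can be sketched in Python (the names here are assumptions, not the library's API): an embedded pair reuses the same stage values $k_i$ with two sets of final coefficients, and the difference of the two results is the raw material of the error estimate that 'corr_fn' then turns into a step-control quantity.

```python
def embedded_difference(k, coeff, corr, dt):
    """Difference between the higher-order combination (weights 'coeff')
    and the lower-order combination (weights 'corr') of the same stage
    values k -- the raw material of an embedded error estimate."""
    return dt * sum((b - bc) * ki for b, bc, ki in zip(coeff, corr, k))
```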

To enable the adaptive version, one needs to define the parameter 'adapt_set'. It takes the following components:

• rel_eps - Relative error tolerance.
• abs_eps - Absolute error tolerance.
• min_scale - A step may scale the next one down by at most this factor.
• max_scale - A step may scale the next one up by at most this factor.
• min_step - Minimal step size.
• max_step - Maximal step size.
All of these take default values, hence an empty structure '{}' will switch the method on.
```
soln = odeRK(@(t,x) [x,-x/1s^2], 0, 5s, .01s, [1e-21m, 1m/s], odeDormandPrince853(), {});
soln.size()
plot(soln[...;0]/1s, soln[...;1]/1m)
```
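How these bounds typically interact is standard: the scaled error estimate proposes a new step factor, which is clamped by 'min_scale'/'max_scale', and the resulting step is clamped by 'min_step'/'max_step'. A generic Python sketch of such a controller (not the library's actual internals):

```python
def propose_next_step(dt, err, order, adapt_set):
    """Generic step-size controller: err is the error estimate already
    scaled by the tolerances (err <= 1 means the step is accepted)."""
    safety = 0.9
    if err == 0.0:
        scale = adapt_set["max_scale"]
    else:
        # Classical (order+1)-th root update rule for the step factor.
        scale = safety * err ** (-1.0 / (order + 1))
        scale = min(max(scale, adapt_set["min_scale"]), adapt_set["max_scale"])
    # Keep the absolute step within the configured bounds.
    return min(max(dt * scale, adapt_set["min_step"]), adapt_set["max_step"])
```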
The clear advantage of high-order adaptive methods is that they can make huge steps wherever possible. Sometimes, however, one needs values in between the iteration steps, and standard interpolation techniques apply here too.

The structure 'butcher' will need the additional fields 'interp', 'ip_coeff', and 'ip_steps', corresponding to the 'matrix', 'coeff', and 'steps' of the base method. The first four coefficients of the interpolation are calculated from the iteration step, and the additional coefficients are the evaluations of 'ip_coeff' for the Horner-like polynomial evaluation $$p(t) = c_0 + t\bigl(c_1 + (1-t)\bigl(c_2 + t(c_3 + \dots)\bigr)\bigr),$$ where $t$ is the fraction of the interpolation step. This evaluation is performed by the function 'odeRKEvalInterp()'. The components 'interp' and 'ip_steps' allow additional function evaluations if need be.
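The alternating Horner scheme itself is easy to state outside the library. A minimal Python sketch of the evaluation that 'odeRKEvalInterp()' performs on a coefficient list (the function name and coefficient layout here are assumptions; only the formula is from the text):

```python
def eval_interp_horner(coeffs, t):
    """Evaluate p(t) = c0 + t*(c1 + (1-t)*(c2 + t*(c3 + ...)))
    for t in [0, 1], the fraction of the interpolation step."""
    acc = 0.0
    # Build from the innermost coefficient outward; the factors
    # alternate between t and (1 - t), with t on the outermost level.
    for i in range(len(coeffs) - 1, 0, -1):
        factor = t if i % 2 == 1 else 1.0 - t
        acc = factor * (coeffs[i] + acc)
    return coeffs[0] + acc
```

Note that $p(0) = c_0$, so the polynomial starts exactly at the iteration value of the step.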
```
f = @(t,x) [x,-x/1s^2];
t0 = 0;
t1 = 5s;
dt = .01s;
x0 = [1e-21m, 1m/s];
B = ejs.odeDormandPrince853();

print("Comparing raw and interpolated results").title()
sparse = ejs.odeRK(f, t0, t1, dt, x0, B, {});
interp = ejs.odeRK(f, t0, t1, dt, x0, B, {interp:true});
j0 = 21; j1 = 29;
t = sparse[j0;0]..1ms..sparse[j1;0];
x = (@(_) ejs.odeRKEvalInterp(interp,_))(#t);
plot(sparse[j0..j1;0]/1s,sparse[j0..j1;1]/1m).curve(t/1s,x/1m)

print("Comparing their errors").title()
z = sin(t/1s);
p = splineInterp(sparse[...;0..1], []);
y = (@(_) splineEval(p,_))(#t);
plot(t, y/1m-z).curve(t, x/1m-z)

print("Comparing global spline interpolation error to RK interpolation").title()
s = splineInterp(sparse[...;0..1], [(@(s) splineEval(splineDiff(s),t0)), f(t0, sparse[0;1..2]); (@(s) splineEval(splineDiff(s),t1)), f(t1, sparse[-1;1..2])]);
v = (@(_) splineEval(s,_))(#t);
plot(t, v/1m-z).curve(t, x/1m-z)

print("Comparing local spline interpolation error to RK interpolation").title()
j = 26;
t = sparse[j;0]..0.5ms..sparse[j+1;0];
z = sin(t/1s);
x = (@(_) ejs.odeRKEvalInterp(interp,_))(#t);
v = (@(_) splineEval(s,_))(#t);
s = splineInterp(sparse[j-1..j+2; 0..1], [(@(s) splineEval(splineDiff(s), sparse[j;0])), f(sparse[j;0], sparse[j;1..2]); (@(s) splineEval(splineDiff(s),sparse[j+1;0])), f(sparse[j+1;0], sparse[j+1;1..2])]);
w = (@(_) splineEval(s,_))(#t);
plot(t, w/1m-z).curve(t, x/1m-z).curve(t, v/1m-z)
```