Monday, September 22, 2008

Tutorial 8: Generic programming, functional approach

I have decided to continue this series by extending the integrator to handle 2d and 3d. One of the annoyances an amateur game developer encounters is the proliferation of 3d vector classes. They all use the same data layout, all offer the same methods, all offer the same performance, yet there is at least one such class per library. For instance, in my go-kart racing game, I have to deal with 3d vectors in OpenAL, 3d vectors in CGAL (which I used for Delaunay triangulation), and should I start using ODE (a physics library), I'll have to deal with yet another type of 3d vector. To make things worse, I am myself guilty of having implemented my own 3d vector math library.

The type inference feature in F# makes it easy to avoid introducing yet another 3d vector class. When the obligation to explicitly annotate all your variables with types disappears, writing polymorphic code becomes easy.

What approaches do we have to write generic code?
- the object-oriented approach based on interfaces, classes and method overriding,
- what I would call the functional approach,
- the template approach (in C++ terminology) also known as the generics approach (Java, .Net and friends terminology), and
- an approach which I have never seen yet (in compiled languages), which I will call the meta-programming approach.

I will skip over the first method, mostly because it has already been abundantly described and explained.
The second method is the subject of this Tutorial, and is also described in chapter 5 of Expert F#.
The third method will be explored in the next Tutorial, and I am keeping the fourth method, the most exciting, for the last of the tutorials dedicated to generic programming.
Normally I would have started by showing the code, but in this case I thought I needed to justify why I should change the code at all. Especially if you come from C++, you might think that running the integration in 2d or 3d would just be a matter of changing the kind of vectors passed to the "simulate" and "accel" functions. As long as the new vector types support "operator+" and "operator*" we should be fine.
You would be wrong. Consider:
let x = 0.5 * a

I am not sure of the reasons behind this design decision, but in F# this means that "a" and "x" are inferred to have the same type as 0.5, which is float. Moreover, this use of the multiplication operator is now constrained to floats: a function defined this way is not generic, and cannot be reused with ints or vectors. Like it or not, that's the way it is.
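A small F# snippet (with hypothetical function names of my own) illustrates how one use of an operator pins the types:

```fsharp
// The literal 0.5 forces 'a' to float, so 'scale' is float -> float,
// not a generic function.
let scale a = 0.5 * a

// With no float in sight, arithmetic defaults to int:
// 'twice' is int -> int and rejects floats.
let twice a = 2 * a

printfn "%f" (scale 3.0)   // prints 1.500000
printfn "%d" (twice 3)     // prints 6
// scale 3     // type error: expected float, got int
// twice 3.0   // type error: expected int, got float
```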

In practice, this means that I am probably best off avoiding operators + and *. Instead, the caller of my functions will have to tell them what functions to use to perform arithmetic operations.
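To make the idea concrete before the full listing, here is a toy sketch (a hypothetical "midpoint" function, not part of the tutorial's code): the caller supplies the addition and scaling functions, and the function itself stays generic.

```fsharp
// 'midpoint' never touches '+' or '*' directly, so it works for any
// type for which the caller can supply addition and scaling.
let midpoint add scale a b =
    scale 0.5 (add a b)

// Works on plain floats, using the built-in operators...
printfn "%f" (midpoint (+) ( * ) 1.0 3.0)   // prints 2.000000

// ...and on pairs, with hand-written operations:
let add2 (x1, y1) (x2, y2) = (x1 + x2, y1 + y2)
let scale2 k (x, y) = (k * x, k * y)
printfn "%A" (midpoint add2 scale2 (0.0, 0.0) (2.0, 4.0))   // (1.0, 2.0)
```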

OK, enough talk, here comes the code.


open System

let run_time_func (func: unit -> 'a) =
    let watch = new System.Diagnostics.Stopwatch()
    watch.Start()
    let res = func ()
    watch.Stop()
    let diff0 = float watch.ElapsedMilliseconds / 1000.0
    (diff0, res)

let map_parallel func items =
    let tasks =
        seq {
            for i in items -> async {
                return (func i)
            }
        }
    Async.Run (Async.Parallel tasks)

type Vec1d = struct
    val x: float

    new (x) = { x = x }

    static member plus (a: Vec1d) (b: Vec1d) = Vec1d(a.x + b.x)
    static member scale (k: float) (a: Vec1d) = Vec1d(k * a.x)
end

type Vec2d = struct
    val x: float
    val y: float

    new (x, y) = { x = x; y = y }

    static member plus (a: Vec2d) (b: Vec2d) = Vec2d(a.x + b.x, a.y + b.y)
    static member scale (k: float) (a: Vec2d) = Vec2d(k * a.x, k * a.y)
end

type Vec3d = struct
    val x: float
    val y: float
    val z: float

    new (x, y, z) = { x = x; y = y; z = z }

    static member plus (a: Vec3d) (b: Vec3d) = Vec3d(a.x + b.x, a.y + b.y, a.z + b.z)
    static member scale (k: float) (a: Vec3d) = Vec3d(k * a.x, k * a.y, k * a.z)
end

type VecOps<'vector, 'scalar> = class
    val up: 'vector
    val add: 'vector -> 'vector -> 'vector
    val scale: 'scalar -> 'vector -> 'vector

    new (up, add, scale) = { up = up; add = add; scale = scale }
end

let gravity (vec_ops: VecOps<'a, 'b>) =
    vec_ops.scale (-9.81) vec_ops.up

let spring (vec_ops: VecOps<'a, 'b>) stiff pos =
    vec_ops.scale (-stiff) pos

let drag (vec_ops: VecOps<'a, 'b>) k speed =
    vec_ops.scale (-k) speed

let euler_implicit (vec_ops: VecOps<'a, 'b>) accel_func pos speed delta =
    let speed2 = vec_ops.add speed (vec_ops.scale delta (accel_func pos speed))
    vec_ops.add pos (vec_ops.scale delta speed2), speed2

let adapt (f: 'num -> 'num -> 'num) pos speed =
    let accel = f pos speed
    speed, accel

let runge_kutta_4 (vec_ops: VecOps<'a, 'b>) deriv_func pos speed delta =
    let extrapolate x delta dx =
        vec_ops.add x (vec_ops.scale delta dx)

    let half_delta = 0.5 * delta

    let s1, a1 = deriv_func pos speed
    let s2, a2 = deriv_func (extrapolate pos half_delta s1) (extrapolate speed half_delta a1)
    let s3, a3 = deriv_func (extrapolate pos half_delta s2) (extrapolate speed half_delta a2)
    let s4, a4 = deriv_func (extrapolate pos delta s3) (extrapolate speed delta a3)

    let sixth_delta = delta / 6.0

    // old + sixth_delta * (v1 + 2*v2 + 2*v3 + v4), using only add and scale
    let update old v1 v2 v3 v4 =
        vec_ops.add old
            (vec_ops.scale sixth_delta
                (vec_ops.add v1
                    (vec_ops.add v2
                        (vec_ops.add v2
                            (vec_ops.add v3
                                (vec_ops.add v3 v4))))))

    let pos1 = update pos s1 s2 s3 s4
    let speed1 = update speed a1 a2 a3 a4
    pos1, speed1

let simulate intg_func t0 t_fin delta pos0 speed0 =
    let rec repeat t0 pos0 speed0 =
        if t0 > t_fin then (pos0, speed0)
        else
            let t1 = t0 + delta
            let pos1, speed1 = intg_func pos0 speed0 delta
            repeat t1 pos1 speed1
    repeat t0 pos0 speed0

let run euler_func s0 =
    let t0 = 0.0
    let t_fin = 1000.0
    let delta = 0.005

    let run_timed map_func =
        let n_runs = 1
        let times =
            [ for i in 1..n_runs ->
                let (run_time, res) =
                    run_time_func (fun () ->
                        s0 |> map_func (fun (x, y) -> simulate euler_func t0 t_fin delta x y))
                run_time ]

        let max = Seq.max times
        let min = Seq.min times
        let avg = (Seq.fold (fun x y -> x + y) 0.0 times) / float n_runs
        (min, max, avg)

    let single_min, single_max, single_avg = run_timed List.map
    let multi_min, multi_max, multi_avg = run_timed map_parallel

    printfn "Single (min, max, avg): %f %f %f" single_min single_max single_avg
    printfn "Multi (min, max, avg): %f %f %f" multi_min multi_max multi_avg

let Vec1D_ops = VecOps<Vec1d, float>(Vec1d(1.0), Vec1d.plus, Vec1d.scale)
let Vec2D_ops = VecOps<Vec2d, float>(Vec2d(0.0, 1.0), Vec2d.plus, Vec2d.scale)
let Vec3D_ops = VecOps<Vec3d, float>(Vec3d(0.0, 1.0, 0.0), Vec3d.plus, Vec3d.scale)

let accel (vec_ops: VecOps<'a, 'b>) (pos: 'a) (speed: 'a) =
    vec_ops.add (spring vec_ops 100.0 pos)
        (vec_ops.add (drag vec_ops 1.0 speed) (gravity vec_ops))

let n_bodies = 10

let s0_Vec1d = [ for s in 1..n_bodies -> (Vec1d(float s), Vec1d(0.0)) ]
let ef_Vec1d = euler_implicit Vec1D_ops (accel Vec1D_ops)
let rk4_Vec1d = runge_kutta_4 Vec1D_ops (adapt (accel Vec1D_ops))

let s0_Vec2d = [ for s in 1..n_bodies -> (Vec2d(float s, 1.0), Vec2d(0.0, 0.0)) ]
let ef_Vec2d = euler_implicit Vec2D_ops (accel Vec2D_ops)
let rk4_Vec2d = runge_kutta_4 Vec2D_ops (adapt (accel Vec2D_ops))

let s0_Vec3d = [ for s in 1..n_bodies -> (Vec3d(float s, 1.0, -1.0), Vec3d(0.0, 0.0, 0.0)) ]
let ef_Vec3d = euler_implicit Vec3D_ops (accel Vec3D_ops)
let rk4_Vec3d = runge_kutta_4 Vec3D_ops (adapt (accel Vec3D_ops))

printfn "1D (struct)"
run ef_Vec1d s0_Vec1d
run rk4_Vec1d s0_Vec1d

printfn "2D (struct)"
run ef_Vec2d s0_Vec2d
run rk4_Vec2d s0_Vec2d

printfn "3D (struct)"
run ef_Vec3d s0_Vec3d
run rk4_Vec3d s0_Vec3d


The code was somewhat cleaned up from previous tutorials, but no new concepts were added. I will nevertheless describe my usage of structs in further detail.

type Vec2d = struct
    val x: float
    val y: float

    new (x, y) = { x = x; y = y }

    static member plus (a: Vec2d) (b: Vec2d) = Vec2d(a.x + b.x, a.y + b.y)
    static member scale (k: float) (a: Vec2d) = Vec2d(k * a.x, k * a.y)
end

This defines a two-dimensional vector type supporting addition through the static method plus, and scalar product by a float through the static method scale. Nothing surprising there.

The next type is more interesting:
type VecOps<'vector, 'scalar> = class
    val up: 'vector
    val add: 'vector -> 'vector -> 'vector
    val scale: 'scalar -> 'vector -> 'vector

    new (up, add, scale) = { up = up; add = add; scale = scale }
end

Passing the addition and product functions to every function that needs them would be tedious, so I have defined a class to hold them. The first line declares a type called VecOps, parametrized by two types denoting respectively the vector type and the scalar type.
The second line specifies the upward direction; it is used only by the "gravity" function. A real integration library would normally not contain functions computing forces, since these are provided by the users, so I know it's a bit clumsy to have this data member there.
The third and fourth lines define the types of the addition and scalar product operations.

It was a bit of a pain to add the extra parameter to each function manipulating vectors, but it's nothing unmanageable. This could have been avoided by grouping all these functions into a class and passing the VecOps object to its constructor.
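As a sketch of that alternative (a hypothetical Integrator class and Ops record of my own, not the code used in this tutorial), the operations could be captured once by the constructor:

```fsharp
// Hypothetical sketch: a record standing in for VecOps, and a class that
// receives it once instead of every function taking an extra parameter.
type Ops<'vector, 'scalar> =
    { add: 'vector -> 'vector -> 'vector
      scale: 'scalar -> 'vector -> 'vector }

type Integrator<'vector, 'scalar>(ops: Ops<'vector, 'scalar>) =
    // Same step as euler_implicit above, but 'ops' is now a field.
    member this.EulerImplicit accel_func pos speed delta =
        let speed2 = ops.add speed (ops.scale delta (accel_func pos speed))
        ops.add pos (ops.scale delta speed2), speed2

// Plain floats serve as 1d vectors here:
let ops1d : Ops<float, float> = { add = (+); scale = ( * ) }
let intg = Integrator(ops1d)
let pos1, speed1 = intg.EulerImplicit (fun pos speed -> -100.0 * pos) 1.0 0.0 0.01
```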
To avoid repeatedly passing VecOps objects to the functions of the physics library, users can introduce shortcuts using closures:
let ef_Vec1d = euler_implicit Vec1D_ops (accel Vec1D_ops)
let rk4_Vec1d = runge_kutta_4 Vec1D_ops (adapt (accel Vec1D_ops))

I'll close with a comment on performance. To put it bluntly, it's terrible. I am not quite sure whether I have done something wrong, so I won't post scary numbers (yet), but I would say this method is not usable for real-time physics simulations. I have un-inlined all functions, because (i) it makes little difference to execution times, and (ii) the decision to inline or not is best left to the .NET JIT.

There are however legitimate uses of the inline keyword, which will be shown in the next Tutorial.
