
Steering Behaviors

October 2021

Imagine a world with bad steering dynamics. A beautiful place where people wake up falling off their beds, and spend their lives burning toast or accidentally pouring boiling tea all over themselves every morning. To me, that's a lifestyle, but if you're into eating your breakfast more so than wearing it, then you might be familiar with steering dynamics.

In this article I'm going to introduce Steering Behaviors, which are basically methods for getting from a point A to a point B. In particle systems, we often use what we call autonomous agents, which all sounds very fancy and secret agenty, but basically means that we have tiny circles that aren't happy with where they are and want to go somewhere else.

In the drawing below, we have a top view of a car making a turn. The current velocity points in the original direction of travel, whereas the desired velocity points towards the turn. As our car was already moving before it decided to turn, it needs to adapt its current velocity to progressively match the desired velocity, calculating new velocities until its velocity aligns with the desired one. This is done by using a steering force (in red) to guide the current velocity towards the desired velocity. Which all makes more sense with a drawing of a car, because cars.

In code, if you know your desired location, all you have to do is calculate the desired velocity and then the steering force, like this:

let desired = target.position - particle.position; // calculate desired velocity
desired.setMagnitude(maxSpeed); // set vector length to the max speed
let force = desired - particle.velocity; // calculate steering force
force.setMagnitude(maxForce); // set vector length to the max steering force
applyForce(force);

The key formula that makes everything work is this:

Steering Force = Desired Velocity - Particle Velocity.

With it you can now do anything, and by anything I mean all you need to do is calculate the desired velocity for any type of steering behavior you wish and apply this formula. You probably noticed that I use two new variables called maxSpeed and maxForce: think of them as your particle's capabilities and limits. A car for instance is limited both in its ability to go fast in a straight line, and in its ability to turn fast. These correspond respectively to the vehicle's max speed and max steering force.
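To make the snippet above actually runnable, here is a minimal sketch of the whole formula in plain JavaScript. The `seekForce` name and the `{ x, y }` object helpers are my own inventions for this sketch; the article's Vector class is assumed, so I inline the vector math:

```javascript
// hypothetical helpers on bare { x, y } objects
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const mag = (v) => Math.hypot(v.x, v.y);
const setMag = (v, m) => {
  const len = mag(v) || 1; // avoid dividing by zero on a null vector
  return { x: (v.x / len) * m, y: (v.y / len) * m };
};

// Steering Force = Desired Velocity - Particle Velocity
function seekForce(particle, target, maxSpeed, maxForce) {
  let desired = sub(target.position, particle.position); // where we want to go
  desired = setMag(desired, maxSpeed);                   // ...at full speed
  let force = sub(desired, particle.velocity);           // the steering force
  return setMag(force, maxForce);                        // within our limits
}
```

The returned force is what you would then pass to `applyForce` every frame.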

Now for the fun stuff: here is a list and implementation of some of the steering behaviors, more or less as originally designed by Craig Reynolds, the original secret agent man.

Containment within walls

There are a lot of ways to handle particles when they hit edges. I, for instance, often use a wrap function to teleport particles from one side to another when they get out of range. It gives off a Nokia 3310 Snake 2 vibe, if you see what I mean. But since this is about steering behaviors, we're going to make a naive wall avoidance steering behavior.

To do this we are going to define the walls, and a range within these walls where we reverse the particle's velocity at maximum speed: for horizontal walls we reverse the y-axis component, and for vertical walls the x-axis component.

if(particle.position.x > width - wallRange){ // right wall
let desired = new Vector( -maxSpeed, particle.velocity.y );
}
else if(particle.position.x < wallRange){ // left wall
let desired = new Vector(  maxSpeed, particle.velocity.y );
}
// .. calculate steering and apply force

if(particle.position.y < wallRange){ // top wall
let desired = new Vector(  particle.velocity.x, maxSpeed );
}
else if(particle.position.y > height - wallRange){ // bottom wall
let desired = new Vector(  particle.velocity.x,  -maxSpeed );
}
// .. calculate steering and apply force
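Put together as a runnable sketch, the two checks above can return a list of desired velocities (a particle sitting in a corner gets one per wall, each of which then goes through the steering formula). `containDesired` and the scene parameters `width`, `height` and `wallRange` are names I made up for this sketch:

```javascript
function containDesired(particle, width, height, wallRange, maxSpeed) {
  const desires = []; // one desired velocity per wall we are too close to
  if (particle.position.x > width - wallRange) {         // right wall
    desires.push({ x: -maxSpeed, y: particle.velocity.y });
  } else if (particle.position.x < wallRange) {          // left wall
    desires.push({ x: maxSpeed, y: particle.velocity.y });
  }
  if (particle.position.y < wallRange) {                 // top wall
    desires.push({ x: particle.velocity.x, y: maxSpeed });
  } else if (particle.position.y > height - wallRange) { // bottom wall
    desires.push({ x: particle.velocity.x, y: -maxSpeed });
  }
  return desires; // empty when the particle is safely away from the walls
}
```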

As this behavior is very practical, I'm going to combine it with all the following steering behaviors.

Seek and Flee

Seek and Flee are two opposite behaviors. Seek has the particle desiring to go towards a target at maximum speed, while Flee has the particle desiring to go away from the target at maximum speed. Both behaviors' desired velocities can be computed with a simple subtraction between the particle's position and the target's position.

In the simulation below, you can use your mouse to get in range and activate the seek behavior of the blue particles or the flee behavior of the red ones !

// Seek's Desired Velocity
let desired = target.position - particle.position;

// Flee's Desired Velocity
let desired = particle.position - target.position;
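Since Flee's desired velocity is just Seek's with the subtraction flipped, the two fit in a couple of one-liners. The function names here are my own, on bare `{ x, y }` vectors:

```javascript
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });

const seekDesired = (particle, target) => sub(target.position, particle.position);
const fleeDesired = (particle, target) => sub(particle.position, target.position);
```

Either result is then set to `maxSpeed` and fed through the steering formula.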

Pursue and Evade

Pursue and Evade are very similar to Seek and Flee, in the sense that we're also talking about attraction and repulsion, but this time, the target's velocity is added to the target's position for a more accurate chase/escape. In Pursue, the pursuing particle is attracted to the future position of its target, and in Evade, the evading particle is repulsed by the future position of its target.

I drew a circle around one pursuing particle and one evading particle. As they get in range of a potential target, their behavior activates and a line is drawn between them and their target.

// Pursue's Desired Velocity
let futurePosition = target.position + target.velocity;
let desired = futurePosition - particle.position;

// Evade's Desired Velocity
let futurePosition = target.position + target.velocity;
let desired = particle.position - futurePosition;
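As a runnable sketch with plain `{ x, y }` vectors (function names are my own; the one-step lookahead of `position + velocity` follows the article's simplification):

```javascript
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });

function pursueDesired(particle, target) {
  const futurePosition = add(target.position, target.velocity); // one step ahead
  return sub(futurePosition, particle.position); // attracted to the future spot
}

function evadeDesired(particle, target) {
  const futurePosition = add(target.position, target.velocity);
  return sub(particle.position, futurePosition); // repulsed by the future spot
}
```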

Arrival

Until now, particles have always wanted to go to their desired location at their maximum speed. If particles were created in the image of man, then that would be fine. But for the sake of the arrival behavior, we're going to implement a slowing coefficient that gets closer to 0 as the particle approaches its target.

In the simulation below, you can use your mouse to define a picnic area and see all the lovely red particles arrive and gradually decelerate until they stop at your pointer.

let desired = target.position - particle.position;
let distance = desired.length(); // distance to target
desired.normalize();

if(distance < slowDownRange){ // if we are in the "slow down" range, slow down
// we need a coef that gets smaller proportionally to the distance
let coef = ( distance / slowDownRange ) * maxSpeed;
desired *= coef; // set desired magnitude to coef
}
else{ // then we can go at max speed
desired *= maxSpeed; // set desired magnitude to maxSpeed
}
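The same logic as a self-contained function (the name `arriveDesired` is mine, and `slowDownRange` is a hypothetical tuning value, just as in the snippet above):

```javascript
function arriveDesired(particle, target, maxSpeed, slowDownRange) {
  const dx = target.position.x - particle.position.x;
  const dy = target.position.y - particle.position.y;
  const distance = Math.hypot(dx, dy); // distance to target
  if (distance === 0) return { x: 0, y: 0 }; // already there, stay put

  // full speed far away, proportionally slower inside the slow-down range
  const speed = distance < slowDownRange
    ? (distance / slowDownRange) * maxSpeed
    : maxSpeed;

  // normalized direction scaled to the chosen speed
  return { x: (dx / distance) * speed, y: (dy / distance) * speed };
}
```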

Wander

Have you ever wondered what NPCs (non-player characters) do in video games while you're pillaging your own fridge at 4am? Maybe they're swiping other NPCs on Tinder/Grindr, or maybe they're calling their mom to complain about the fact that they gave you a simple quest 4 months ago and it still isn't done. They could be doing a lot of things actually, but most of the time they're just wandering.

Wander is a steering behavior simulating how living beings move around space when they have no purpose in life.

To implement the wander behavior, I first define a clock in front of every particle. That clock is composed of a circle and a hand. The hand's position will be incremented or decremented by random values at every step, making that clock very inaccurate for telling the time, but essential to the wander algorithm.

The way the wander behavior works is that, at every step, each particle looks at the clock's single hand, and sets its desired location to be the tip of that hand.

let circleDistance = 30; // distance between particle and circle
let angleStep = 1; // size of random angle step increment

let circlePosition = particle.velocity.clone(); // copy velocity into circle
circlePosition.normalize();
circlePosition *= circleDistance; // set the circle position ahead of the particle

// now we need to have an angle set, and increment it with some random every step
angle += random() * angleStep - angleStep * 0.5;

let displacement = new Vector(0,-1); // initial direction
displacement *= circleDistance; // set direction magnitude
displacement.rotate(angle); // rotate direction according to the wander angle

let desired = circlePosition + displacement;
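The same clock, assembled into one runnable function. The name `wanderDesired` is mine, the `.rotate()` call is inlined as a standard 2D rotation of the initial (0, -1) direction, and the jittered angle is state the particle carries between frames:

```javascript
function wanderDesired(particle, angle, circleDistance = 30) {
  // circle offset: circleDistance ahead of the particle, along its velocity
  const speed = Math.hypot(particle.velocity.x, particle.velocity.y) || 1;
  const circle = {
    x: (particle.velocity.x / speed) * circleDistance,
    y: (particle.velocity.y / speed) * circleDistance,
  };
  // hand of the clock: the (0, -1) direction rotated by angle, then scaled
  const hand = {
    x: Math.sin(angle) * circleDistance,
    y: -Math.cos(angle) * circleDistance,
  };
  return { x: circle.x + hand.x, y: circle.y + hand.y };
}

// each frame, jitter the angle before asking for the desired direction:
// angle += Math.random() * angleStep - angleStep * 0.5;
```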

Further Steering Behaviors

I've so far implemented a good chunk of the list of simple behaviors described by Craig Reynolds here. The ones that are missing, Obstacle Avoidance, Wall Following, Path Following and Flow Field Following, all deserve their own big article. So I guess I'll see you another time for the rest :) ! bye