Conway's Game of Life encoded in video predictive frames

I've always wanted to do something involving glitch art and Conway's Game of Life.

The generative, self-sustained, independent aspects and the search for patterns all speak to me on the same level. But I had never found a grid-like system that could be abused in an interesting way. That changed last year at the glitch art festival /fu:bar/ 2018 in Zagreb, where I attended Ramiro Polla's workshop about ffglitch, a fork of FFmpeg designed to extract, modify and reinject the motion vectors of videos compressed in MPEG-2.

Last month, Thomas Collet and I set up a residency in Paris and reworked the ffglitch helper scripts developed by Jo Fragment. The script we modded is randomyzer.py, which runs through a bi-dimensional array of vectors and changes their values.

The motion vectors of every frame are stored in a table of pairs (fwd_mv), where mv[0] is a vector's x component and mv[1] its y component.
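
For illustration, here's a toy 2×3 grid in that layout (the values are made up; in a real frame there's one entry per macroblock):

```python
# A toy fwd_mv: one list per macroblock row, one [x, y] pair per block.
fwd_mv = [
    [[3, -1], [0, 0], [-2, 5]],
    [[1, 1], [4, -4], [0, 2]],
]

mv = fwd_mv[0][2]    # third block of the first row
print(mv[0], mv[1])  # → -2 5
```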

So if you wanted to swap the x and y values of each motion vector, you'd do this:

```
for row in fwd_mv:
    for mv in row:
        # swap both components in one step; two sequential assignments
        # would overwrite mv[1] with its own value
        mv[0], mv[1] = mv[1], mv[0]
return fwd_mv
```

It's also fun to sweep things from one side to the other. Since the outside of the video is undefined, pulling motion in from beyond the frame's coordinates fills the area with your encoder's default null colour. Here's what happens when you set all the vectors to (-15, 0).

```
for row in fwd_mv:
    for mv in row:
        mv[0] = -15
        mv[1] = 0
return fwd_mv
```

I'm not going to go over all the possibilities in this post, but even the simplest operations yield wildly interesting results. Here's a last example: what happens when you give x the same value as y.

```
for row in fwd_mv:
    for mv in row:
        mv[0] = mv[1]  # copy the y component onto x
return fwd_mv
```

To make things a little more interesting, we decided we needed a few more variables to play with than the motion vectors' x and y values, so we defined a whole bunch of things:

```
def glitch_frame(frame, number_of_frames, last_frame):
    try:
        fwd_mv = frame["forward"]
    except KeyError:
        # no forward motion vectors found: probably an I or B frame
        return last_frame
    # FUNKY VARIABLES
    #################
    l = last_frame       # vector array of the previous frame
    h = len(fwd_mv)      # height of the frame, in macroblocks
    w = len(fwd_mv[0])   # width of the frame, in macroblocks
    h_center = h // 2
    w_center = w // 2
    n = int(number_of_frames)
    i = 0                # row index
    j = 0                # column index
    for row in fwd_mv:
        j = 0
        for mv in row:
            mv[0] = mv[0]  # make changes here
            mv[1] = mv[1]  # and here!
            j = j + 1
        i = i + 1
    return fwd_mv
```

It now almost feels like some kind of GLSL shader format, which makes it really easy to set up functions with geometric control of the plane. For instance, if we only wanted to glitch a circle in the center of the video, we'd use the circle's Cartesian equation, just like shader coders do.

```
k = 14  # circle radius, in macroblocks
for row in fwd_mv:
    j = 0
    for mv in row:
        # inside the circle centered on (w_center, h_center)?
        if (j - w_center) * (j - w_center) + (i - h_center) * (i - h_center) - (k * k) < 0:
            mv[1] = -10
            mv[0] = 0
        j = j + 1
    i = i + 1
return fwd_mv
```

We can also explore trigonometric functions to give the glitch a more organic, wobbly feel. Here's an example applying sine waves everywhere (look closely and you'll notice the waves):

```
for row in fwd_mv:
    j = 0
    for mv in row:
        mv[0] = math.sin(i) * mv[0]
        mv[1] = math.cos(j) * mv[1]
        j = j + 1
    i = i + 1
return fwd_mv
```

So if simple operations like these are possible, it should be easy to encode the Game of Life, right?
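
As a refresher, the standard rules only need a live-neighbour count per cell: a live cell survives with 2 or 3 live neighbours, a dead cell is born with exactly 3. A minimal, self-contained sketch on a plain grid of 0/1 cells (not ffglitch code, just the rule itself):

```python
def gol_step(grid):
    """One Game of Life generation on a 2D grid of 0/1 cells."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # count live neighbours, treating the outside as dead
            count = sum(grid[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)
                        and 0 <= i + di < h and 0 <= j + dj < w)
            if grid[i][j]:
                new[i][j] = 1 if count in (2, 3) else 0
            else:
                new[i][j] = 1 if count == 3 else 0
    return new

# a blinker oscillates between a horizontal and a vertical bar
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(gol_step(blinker))  # → [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```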

Since our grid has a little more complexity than the basic GoL board (each cell holds two values rather than one), it's also up to us to make the most of it by writing our own front-end rules.

Here's a first version, where:

- each dead cell becomes a null vector

- each alive cell takes its value from its next-frame self.
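
A toy sketch of that first rule set, on a made-up 3×3 vector grid rather than a real frame (the names and the `alive` test are mine; as in the final script, a cell counts as alive when both axes of its vector are non-null, and aliveness is read from the previous frame's grid):

```python
def alive(cell):
    """A cell counts as alive when both axes of its vector are non-null."""
    return abs(cell[0]) >= 1 and abs(cell[1]) >= 1

def rule_v1(fwd_mv, last):
    """Dead cells become null vectors; alive cells keep their current value."""
    h, w = len(fwd_mv), len(fwd_mv[0])
    for i in range(h):
        for j in range(w):
            # count alive neighbours in the previous frame's grid
            count = sum(1 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0)
                        and 0 <= i + di < h and 0 <= j + dj < w
                        and alive(last[i + di][j + dj]))
            if count < 2 or count > 3:
                # under- or over-population: null vector
                fwd_mv[i][j][0] = 0
                fwd_mv[i][j][1] = 0
            # otherwise keep the vector the new frame already carries
    return fwd_mv

# on a fully alive previous frame, corners survive (3 neighbours)
# while the centre dies of over-population (8 neighbours)
last = [[[1, 1] for _ in range(3)] for _ in range(3)]
grid = [[[5, 5] for _ in range(3)] for _ in range(3)]
out = rule_v1(grid, last)
print(out[0][0], out[1][1])  # → [5, 5] [0, 0]
```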

The result wasn't as impressive as we'd expected, though. Too many of the original motion vectors were being re-injected, and what we were left with was macroblocks stacking up in a GoL pattern. That was of course the idea to begin with, but since we had extra variables to play with, why not try other things?

In a following version we defined that:

- each dead cell becomes a null vector

- each alive cell takes its values from the average of the surrounding cells (nulls excluded).
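
A hedged sketch of that averaging step, again on a toy grid (the function name is mine; "nulls excluded" is read as: only non-null components contribute to each axis's integer average):

```python
def rule_v2_cell(i, j, last):
    """Integer average of the non-null neighbour components around (i, j)."""
    h, w = len(last), len(last[0])
    xs, ys = [], []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            if not (0 <= i + di < h and 0 <= j + dj < w):
                continue
            nx, ny = last[i + di][j + dj]
            if nx != 0:
                xs.append(nx)  # null components are excluded from the average
            if ny != 0:
                ys.append(ny)
    # integer averages (floor division), falling back to 0 with no live input
    avg_x = sum(xs) // len(xs) if xs else 0
    avg_y = sum(ys) // len(ys) if ys else 0
    return [avg_x, avg_y]

last = [[[2, 2], [0, 0], [4, 0]],
        [[0, 6], [9, 9], [0, 0]],
        [[0, 0], [0, 0], [0, 0]]]
print(rule_v2_cell(1, 1, last))  # → [3, 4]
```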

I really loved the results we got, but it still felt a little lacking in original content, as if the video were losing its intention completely to the algorithm, which wasn't really what we were looking for. The movement would get out of hand and not enough of the original video's motion would be retained, especially on longer videos.

To balance this, I made a mix of both rule sets, where:

- each dead cell becomes a null vector

- each alive cell that wasn't a null vector takes its value from its next-frame self

- each alive cell that was a null vector is initialized with the average values of the surrounding cells (nulls excluded).

```
count = 0
if 0 < i < (h - 1) and 0 < j < (w - 1):
    # the eight surrounding cells in the previous frame's grid
    neighbours = [l[i + di][j + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0)]
    # a neighbour is alive when both axes of its vector are non-null
    for nb in neighbours:
        if abs(nb[0]) >= 1 and abs(nb[1]) >= 1:
            count = count + 1
if (count < 2) or (count > 3):
    # dead cell (and every border cell): null vector
    mv[0] = 0
    mv[1] = 0
else:
    # alive cell: each null axis is seeded with the neighbourhood
    # "average" (keeping the original divisor of 2.8)
    if mv[0] == 0:
        mv[0] = math.ceil(sum(nb[0] for nb in neighbours) / 2.8)
    if mv[1] == 0:
        mv[1] = math.ceil(sum(nb[1] for nb in neighbours) / 2.8)
```

I really love this. It captures the movement, follows it and reinjects it, all while following a GoL pattern. It feels alive, which is often one of the feelings I get when I'm experiencing a glitch. Seeing all these generated lifeforms inside a video format reminds me a lot of how we often think of ourselves as very singular, yet are inhabited by so many lifeforms. In a way, the genetically us part of us exists less as an organism in its own right, and more as a framework designed to support a particular symbiotic community.

We are a biological accident doing what it can.

- Jacques Brel