10 May 2016

Consider a population of rabbits and foxes. The number of rabbits $r$ and the number of foxes $f$ will range between 0 and 1, representing the percentage of some theoretical maximum population. Each generation, the number of rabbits and foxes changes according to a simple rule.

The number of rabbits in generation $n+1$, based on the number of rabbits $r_n$ and foxes $f_n$ in the previous generation $n$, is given by:

$$r_{n+1} = a \, r_n \, (1 - r_n - f_n)$$

The constant $a$ is the rabbits’ birth rate. For example, if $a = 3$, then each rabbit produces 2 offspring in the next generation. The factor $(1 - r_n - f_n)$ accounts for deaths due to starvation and predation. If the number of rabbits is low, few will die of starvation; if it’s high, many will. Likewise, if the number of foxes is high, many rabbits will die from being eaten.

The number of foxes in the next generation is given by:

$$f_{n+1} = b \, r_n \, f_n$$

This says that the chance that a fox encounters and eats a rabbit is $r_n$. So if the rabbit population is at 80% of its theoretical maximum, 80% of foxes will eat enough to reproduce, and will produce $b$ offspring.

So let’s pick some values for $a$ and $b$ and see how the system behaves. We’ll visualize it by plotting the populations on a graph. But instead of plotting both populations over time, we’ll plot the populations against each other. That is, we’ll leave time out of it and just plot the set of points $(r_i, f_i)$ over, say, 40,000 generations.
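As a sketch of what that computation looks like (in Python; the starting point $(0.3, 0.2)$ and the burn-in length are arbitrary choices of mine):

```python
def step(r, f, a, b):
    """One generation: rabbits reproduce and die off, foxes eat rabbits."""
    return a * r * (1 - r - f), b * r * f

def orbit(a, b, n=40_000, r0=0.3, f0=0.2, burn=1_000):
    """Iterate the system, discard a transient, and collect n points (r_i, f_i)."""
    r, f = r0, f0
    for _ in range(burn):
        r, f = step(r, f, a, b)
    pts = []
    for _ in range(n):
        r, f = step(r, f, a, b)
        pts.append((r, f))
    return pts
```

Plotting the returned points as a scatter gives the pictures below.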

Let’s see what we get.

For most values of $a$ and $b$, the system quickly finds a stable point. For $a = 2$ and $b = 3$, it converges on $r = \frac{1}{3}$ and $f = \frac{1}{6}$. You can check that this is a fixed point of the recurrence.
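Checking the fixed point takes one iteration: plug $r = \frac{1}{3}$, $f = \frac{1}{6}$ into the recurrence with $a = 2$, $b = 3$ and you get the same point back.

```python
a, b = 2, 3
r, f = 1/3, 1/6

# one iteration of the recurrence
r_next = a * r * (1 - r - f)   # 2 * (1/3) * (1/2) = 1/3
f_next = b * r * f             # 3 * (1/3) * (1/6) = 1/6
# (r_next, f_next) equals (r, f), up to float rounding
```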

Rabbits are on the $x$-axis, foxes are on the $y$-axis, and the origin is in the lower left-hand corner.

For other values of $a$ and $b$, the system converges to a loop instead of a point.

Keep in mind the system is not necessarily going from one point to the next around the loop over time. It’s actually jumping between points that all happen to be on the same loop. Weird, huh?

At another point in the parameter space, the loop takes on an irregular shape.

Nearby, the loop breaks up into a set of smaller, weird loops…

… each of which proceeds to get weirder.

Then, everything gets weird.

This is beautiful if you ask me.

This is chaos (still beautiful).

Below, you can explore the parameter space yourself. Your mouse position determines the values of $a$ and $b$. Hold down shift for fine-tuning. (On mobile, drag your finger around the canvas.)

If you’re careful with your mouse, you can find places where the top-level loop divides into smaller loops, which divide again into still smaller loops, presumably ad infinitum. In fact, it seems like you can make the smaller loops exhibit all the same weird behavior the top-level loop does. So there is definitely some self-similar recursive structure here.

As you move your mouse around, doesn’t it feel like you’re looking at 2D slices of some larger, crazy complicated 4D object? I thought so too.

Here is the 3D slice you get when you fix $a = 3.1$ (not recommended on mobile). Click and drag to rotate, scroll to zoom. You can also edit the url parameter to try different values for $a$.

A map of the territory

Playing around with this, it seems like there are regions where the system converges to several points, other regions where it’s a loop, and other regions where it’s more like a cloud. Trying to find “interesting” regions of the parameter space can feel like wandering around without a map. So, let’s make a map.

I’ll color each point $(a, b)$ according to how the system behaves with those parameter values. To do that I’ll use something called the Lyapunov exponent. This is a measure of how quickly two points $(r, f)$ and $(r', f')$, initially spaced very close together, diverge or converge after repeated iteration. It’s assumed that the distance between them will go like $e^{nL}$, where $n$ is the iteration number. If $L = 0$, they stay the same distance apart. If $L \lt 0$, they get closer together over time, and if $L \gt 0$, they diverge. The larger the magnitude of $L$, the faster they diverge (or converge).
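One common way to estimate $L$ numerically (a rough sketch; the trajectory length, the perturbation size $10^{-9}$, and the starting point are all arbitrary choices of mine) is to follow two nearby trajectories and renormalize their separation after every step:

```python
import math

def step(r, f, a, b):
    # the rabbit/fox recurrence
    return a * r * (1 - r - f), b * r * f

def lyapunov(a, b, n=5_000, eps=1e-9):
    # settle onto the attractor first
    r, f = 0.3, 0.2
    for _ in range(1_000):
        r, f = step(r, f, a, b)
    # perturb a second trajectory by eps and track the separation
    r2, f2 = r + eps, f
    total = 0.0
    for _ in range(n):
        r, f = step(r, f, a, b)
        r2, f2 = step(r2, f2, a, b)
        d = math.hypot(r2 - r, f2 - f)
        if d == 0.0:
            d = 1e-300  # guard against exact overlap
        total += math.log(d / eps)
        # pull the perturbed trajectory back to distance eps
        r2 = r + (r2 - r) * eps / d
        f2 = f + (f2 - f) * eps / d
    return total / n  # average log-stretch per iteration
```

For the stable point at $a = 2$, $b = 3$, this gives a negative exponent, as expected.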

I chose shades of green for $L \lt 0$, yellow for $0 \leq L \lt 0.01$, red for $0.01 \leq L \lt 0.1$, and purple for $L \geq 0.1$. If the system diverged to infinity (that is, $f$ gets very large), I colored the point black.
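As code, the coloring is just a threshold ladder (a sketch; the function and its name are mine):

```python
def color(L, diverged=False):
    # map a Lyapunov exponent to the palette described above
    if diverged:
        return "black"   # f blew up
    if L < 0:
        return "green"   # trajectories converge
    if L < 0.01:
        return "yellow"
    if L < 0.1:
        return "red"
    return "purple"      # strongly chaotic
```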

The image below is what I got for $a$ between 1 and 4.333 and $b$ between 1 and 6.

So, that’s a thing. Here it is as a 2400 x 2400 png. There is some definite fractal structure here. You can see this better in the higher resolution image, but the moiré pattern in the yellow/green region indicates long thin lines of alternating color. Between the big green triangles there are smaller triangles, and the bottom-left corners of these triangles extend all the way to the $L = 0$ boundary.

But anyway, laying this map underneath the interactive plot from above, you can see how different regions of the parameter space behave. (Works on mobile too.)

What is this?

I don’t know, but it’s closely related to the logistic map, which is a similar recurrence given by:

$$x_{n+1} = r \, x_n \, (1 - x_n)$$

This would model a population of rabbits subject only to starvation (no foxes). It’s pretty crazy too:
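For example (a quick sketch), at $r = 3.2$, just past the logistic map’s first period-doubling, the orbit settles into a 2-cycle instead of a single value:

```python
def logistic(x, r):
    # the logistic map: rabbits subject only to starvation
    return r * x * (1 - x)

x = 0.5
for _ in range(1_000):  # let the transient die out
    x = logistic(x, 3.2)

cycle = [x]
for _ in range(3):
    x = logistic(x, 3.2)
    cycle.append(x)
# the orbit alternates between two values (about 0.513 and 0.799)
```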

This must be the logistic map’s crazy 4D cousin. In fact, the vertical bars on the bottom right of the map match up precisely with the bifurcation points of the logistic map. The first split is at $r = 3$ ($a = 3$ on my plot), the next is at about 3.45, the next at 3.54, and then chaos takes over at about 3.57, with breaks of order near 3.63, 3.74, and 3.83.

You can also find the ghost of the logistic map itself if you move your mouse around the small purple triangular region near the vertical bars around $a = 3.7, b = 1.67$. Spooky!

So what?

There really is no so what. It’s just astonishing to me how much complex behavior can arise from two pretty simple rules.