import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive
import matplotlib.animation as animation
from IPython.display import HTML
from functools import partial
import matplotlib.style as style; style.use('seaborn-v0_8')
plt.rcParams['figure.figsize'] = (7.8, 2.5); plt.rcParams['figure.dpi'] = 300
plt.rcParams['axes.facecolor'] = 'white'; plt.rcParams['grid.color'] = 'gray'
plt.rcParams['grid.linewidth'] = 0.25
9 Behavioral agency
Wolfram Barfuss | University of Bonn | 2024/2025
▶ Complex Systems Modeling of Human-Environment Interactions
9.1 Motivation | Agent-based modeling of complex systems
The distinctive feature of the science of complex systems is the fascination that arises when the whole becomes greater than the sum of its parts (Figure 9.1) and properties on the macro-level emerge that do not exist on the micro-level.

Complex systems thinking is instrumental in understanding the interactions between society and nature (Figure 9.2).

Agent-based models capture the features of a complex system in the most direct way, in that they model the behavior of individual agents and their interactions.
Learning goals
After this lecture, students will be able to:
- Explain the history and rationale of agent-based modeling and generative social science
- Explain the advantages and challenges of agent-based modeling
- Simulate two famous agent-based models in Python
- Implement animations in Python
- Write object-oriented Python programs
9.3 Example | Conway’s Game of Life
The Game of Life is a cellular automaton devised by the British mathematician John Horton Conway in 1970.
A cellular automaton is a discrete model of computation consisting of a regular grid of cells, each in one of a finite number of states (e.g., on and off).
The Game of Life is a very influential model in the field of complex systems (although Conway wasn't particularly proud of it).
See Inventing Game of Life (John Conway) - Numberphile for a brief backstory on the game of life.
Questions
The game of life is a comparably simple model to answer two very fundamental questions:
- How can something reproduce itself?
- How can a complex structure (like the mind) emerge from a basic set of rules?
States
The cells of the cellular automaton can be in one of two states: dead or alive.
We can represent the state of a cell with a binary variable: 1 (black) for alive and 0 (white) for dead. The state of the whole system can then be represented as follows:
ROWS, COLS = 40, 100  # define the size of the grid
grid = np.random.choice([0, 1], size=(ROWS, COLS), p=[0.7, 0.3])  # generate random states
plt.imshow(grid, cmap='binary', interpolation='none')  # plot the grid
plt.gca().set_xticks([]); plt.gca().set_yticks([]);  # remove x and y ticks
Dynamics
The dynamics of the game of life are governed by the following rules:
- Living cells with fewer than two living neighbors die
- Living cells with more than three living neighbors die
- Dead cells with exactly three living neighbors become alive
We will use matplotlib.animation.FuncAnimation to animate the Game of Life in Python. To do so, we need to implement the game's rules in a function that updates the grid. This function must receive the number of the current frame of the animation plus possible further arguments. We supply it with an image argument representing the grid's image. It must return an iterable of artists, which FuncAnimation uses to update the plot.
# Function to update the grid based on the Game of Life rules
def update_grid(frame, image):
    global grid  # required to access the variable inside the function
    new_grid = grid.copy()

    for i in range(0, ROWS):
        for j in range(0, COLS):
            # the % sign is a modulo division, i.e., 13 % 13 = 0
            neighbors_sum = (
                grid[(i - 1) % ROWS, (j - 1) % COLS] + grid[(i - 1) % ROWS, j] +
                grid[(i - 1) % ROWS, (j + 1) % COLS] + grid[i, (j - 1) % COLS] +
                grid[i, (j + 1) % COLS] + grid[(i + 1) % ROWS, (j - 1) % COLS] +
                grid[(i + 1) % ROWS, j] + grid[(i + 1) % ROWS, (j + 1) % COLS])

            # the rules of the game
            if grid[i, j] == 1 and (neighbors_sum < 2 or neighbors_sum > 3):
                new_grid[i, j] = 0
            elif grid[i, j] == 0 and neighbors_sum == 3:
                new_grid[i, j] = 1

    grid = new_grid
    image.set_array(grid)
    return image,  # must return an iterable of artists
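As an aside, the nested Python loops above become slow for large grids. One alternative is to count all neighbors at once with np.roll. The following is only a sketch under the same periodic boundary conditions, not the version used in the rest of this lecture:

# Sketch: vectorized update using np.roll (assumes the same periodic boundaries as above)
def update_grid_vectorized(frame, image):
    global grid
    # sum the eight shifted copies of the grid; np.roll wraps around the edges
    neighbors_sum = sum(np.roll(np.roll(grid, di, axis=0), dj, axis=1)
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0))
    # apply the rules to the whole grid at once
    grid = np.where((grid == 1) & ((neighbors_sum == 2) | (neighbors_sum == 3)), 1,
                    np.where((grid == 0) & (neighbors_sum == 3), 1, 0))
    image.set_array(grid)
    return image,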
With the update_grid function in place, we can now create the animation using FuncAnimation. The %%capture magic command is used to suppress the output of the cell below, as we will call the animation separately.
%%capture
# Set up the Matplotlib figure and axis
fig, ax = plt.subplots(figsize=(16, 9))
im = ax.imshow(grid, cmap='binary', interpolation='none')
ax.set_xticks([]); ax.set_yticks([])

# Create animation
ani = animation.FuncAnimation(fig, partial(update_grid, image=im),
                              frames=150, interval=150)
Finally, we can display the animation using the HTML
function from the IPython.display
module.
# Display the animation using HTML
# HTML(ani.to_jshtml())
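Alternatively, the animation can be saved to a file instead of being embedded in the notebook, for example as a GIF (the filename here is just an example, and the Pillow writer must be available):

# ani.save('game_of_life.gif', writer='pillow', fps=10)  # example filename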
Emerging structures
Despite the simplicity of the rules, complex structures of "species" that move and reproduce can emerge, even though the rules themselves contain no concept of movement or reproduction.
See, for example, the Epic Conway’s Game of Life or Life in life videos.
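One of the simplest such structures is the glider, a five-cell pattern that translates itself diagonally across the grid every four steps. As a small sketch (reusing the grid, ROWS, and COLS defined above), we can seed a single glider on an otherwise empty grid and animate it with the same update_grid function:

# Sketch: seed a single glider on an empty grid and watch it travel
grid = np.zeros((ROWS, COLS), dtype=int)
glider = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 1, 1]])
grid[1:4, 1:4] = glider  # place the glider near the top-left corner
plt.imshow(grid, cmap='binary', interpolation='none')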
Impact
Although the rules are incredibly simple, it is impossible to say whether a given configuration persists or eventually dies out. There are fundamental limits to prediction.
It has been shown that any computation you can do on a regular computer can also be done within the Game of Life; it is Turing complete.
Complex behavior does not require complicated rules. Complex behavior can emerge from simple rules. This realization has been a key insight of complexity sciences and has shaped the way complexity science is done today.
9.4 Example | Schelling’s segregation model
The second example model studies the phenomenon of racially segregated neighborhoods. The content here is heavily inspired by QuantEcon’s Quantitative Economics with Python.
Questions
We observe racially segregated neighborhoods.
Does that mean that all residents are racists?
Context
In 1969, Thomas C. Schelling developed a simple but striking model of racial segregation.
His model studies the dynamics of racially mixed neighborhoods.
Like much of Schelling’s work, the model shows how local interactions can lead to surprising aggregate structure.
In particular, it shows that a relatively mild preference for neighbors of similar race can lead, in aggregate, to the collapse of mixed neighborhoods and high levels of segregation.
In recognition of this and other research, Schelling was awarded the 2005 Nobel Prize in Economic Sciences (joint with Robert Aumann).
The Model
We will cover a variation of Schelling’s model that is easy to program and captures the main idea.
Set-Up
Suppose we have two types of people: orange people and green people.
For the purpose of this lecture, we will assume there are 250 of each type.
These agents all live on a single-unit square.
The location of an agent is just a point \((x, y)\), where \(0 < x, y < 1\).
Preferences
We will say that an agent is happy if half or more of her 10 nearest neighbors are of the same type.
Here ‘nearest’ is in terms of Euclidean distance.
An agent who is not happy is called unhappy.
An important point here is that agents are not averse to living in mixed areas.
They are perfectly happy if half their neighbors are of the other color.
Behavior
Initially, agents are mixed together (integrated).
In particular, the initial location of each agent is an independent draw from a bivariate uniform distribution on \(S = (0, 1)^2\).
Now, cycling through the set of all agents, each agent is given the chance to stay or move.
We assume that each agent will stay put if they are happy and move if unhappy.
The algorithm for moving is as follows:
1. Draw a random location in \(S\)
2. If happy at the new location, move there
3. Else, go to step 1
In this way, we cycle continuously through the agents, moving as required.
We continue to cycle until no one wishes to move.
Implementation
We use object-oriented programming (OOP) to model agents as objects.
OOP is a programming paradigm based on the concept of objects, which can contain data and code:
- data in the form of fields (often known as attributes or properties), and
- code in the form of procedures (often known as methods).
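As a minimal illustration of these ideas (a toy example, not part of the segregation model), a Python class bundles attributes and methods:

# Toy example: a class bundling data (attributes) and code (methods)
class Counter:
    def __init__(self, start=0):
        self.value = start      # attribute (data)

    def increment(self):        # method (code)
        self.value += 1

c = Counter()
c.increment()
print(c.value)  # prints 1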
Agent class
A class defines how an object will work. Typically, the class will define several methods that operate on instances of the class. A key method is the __init__
method, which is called when an object is created.
class Agent:
    # The init method is called when the object is created.
    def __init__(self, type, num_neighbors, require_same_type):
        self.type = type
        self.draw_location()
        self.num_neighbors = num_neighbors
        self.require_same_type = require_same_type

    def draw_location(self):
        self.location = np.random.uniform(0, 1), np.random.uniform(0, 1)

    def get_distance(self, other):
        "Computes the Euclidean distance between self and another agent."
        a = (self.location[0] - other.location[0])**2
        b = (self.location[1] - other.location[1])**2
        return np.sqrt(a + b)

    def number_same_type(self, agents):
        "Number of neighbors of same type."
        # distances is a list of pairs (d, agent), where d is the distance from
        # agent to self
        distances = []
        for agent in agents:
            if self != agent:
                distance = self.get_distance(agent)
                distances.append((distance, agent))
        # == Sort from smallest to largest, according to distance == #
        distances.sort()
        # == Extract the neighboring agents == #
        neighbors = [agent for d, agent in distances[:self.num_neighbors]]
        # == Count how many neighbors have the same type as self == #
        return sum(self.type == agent.type for agent in neighbors)

    def happy(self, agents):
        "True if a sufficient number of nearest neighbors are of the same type."
        num_same_type = self.number_same_type(agents)
        return num_same_type >= self.require_same_type

    def update(self, agents):
        "If not happy, then randomly choose new locations until happy."
        while not self.happy(agents):
            self.draw_location()
Testing the agent class:
A = Agent(0, num_neighbors=4, require_same_type=2)
type(A)

__main__.Agent

A.location

(0.9530720480796, 0.566243724536969)
Creating a list of agents:
np.random.seed(4)
agents = [Agent(0, 4, 2) for i in range(100)]
agents.extend(Agent(1, 4, 2) for i in range(100))
len(agents)

200
Is agent three happy?
a3 = agents[3]
a3.happy(agents), a3.location
(False, (0.9762744547762418, 0.006230255204589863))
Let’s let agent three update its position:
a3.update(agents)
agents[3].happy(agents), agents[3].location
(True, (0.06780815958339637, 0.961674586087924))
Observation function
We implement a function to plot the distribution of agents.
def plot_distribution(agents, cycle_num, ax=None):
    "Plot the distribution of agents after cycle_num rounds of the loop."
    x_values_0, y_values_0 = [], []
    x_values_1, y_values_1 = [], []
    # == Obtain locations of each type == #
    for agent in agents:
        x, y = agent.location
        if agent.type == 0:
            x_values_0.append(x)
            y_values_0.append(y)
        else:
            x_values_1.append(x)
            y_values_1.append(y)
    if ax is None:
        fig, ax = plt.subplots(figsize=(4, 4))
    plot_args = {'markersize': 4, 'alpha': 0.6}
    # ax.set_facecolor('azure')
    ax.plot(x_values_0, y_values_0, 'o', markerfacecolor='orange', **plot_args)
    ax.plot(x_values_1, y_values_1, 'o', markerfacecolor='green', **plot_args)
    ax.set_xticks([]); ax.set_yticks([])
    ax.set_title(f'Cycle {cycle_num}')
Testing the observation function,
num_of_type_0 = 250
num_of_type_1 = 250
num_neighbors = 10      # Number of agents regarded as neighbors
require_same_type = 5   # Want at least this many neighbors to be same type

# == Create a list of agents == #
agents = [Agent(0, num_neighbors, require_same_type) for i in range(num_of_type_0)]
agents.extend(Agent(1, num_neighbors, require_same_type) for i in range(num_of_type_1))

plot_distribution(agents, 0)
Simulation run
np.random.seed(10)  # For reproducible random numbers

# == Main == #
num_of_type_0 = 250
num_of_type_1 = 250
num_neighbors = 10      # Number of agents regarded as neighbors
require_same_type = 5   # Want at least this many neighbors to be same type

# == Create a list of agents == #
agents = [Agent(0, num_neighbors, require_same_type) for i in range(num_of_type_0)]
agents.extend(Agent(1, num_neighbors, require_same_type) for i in range(num_of_type_1))

count = 1
# == Loop until none wishes to move == #
fig, axs = plt.subplots(2, 3, figsize=(13, 6))
axs.flatten()

while True:
    print('Entering loop ', count)
    plot_distribution(agents, count, axs.flatten()[count-1])
    count += 1

    # Update and check whether everyone is happy
    no_one_moved = True
    for agent in agents:
        old_location = agent.location
        agent.update(agents)
        if agent.location != old_location:
            no_one_moved = False
    if no_one_moved:
        break

print('Converged, terminating.')
plt.tight_layout()
Entering loop 1
Entering loop 2
Entering loop 3
Entering loop 4
Entering loop 5
Entering loop 6
Converged, terminating.
In this instance, the program terminated after 6 cycles through the set of agents, indicating that all agents had reached a state of happiness.
Interpretation
What is striking about the pictures is how rapidly racial integration breaks down.
This is despite the fact that people in the model don’t actually mind living mixed with the other type.
Even with these preferences, the outcome is a high degree of segregation.
9.5 Challenges of agent-based modeling
- Performance Limitations. The execution speed of ABMs can be slow, which can be a limitation for extensive simulations.
- Transparency and Reproducibility. Providing a clear and accessible description is challenging due to model complexity.
- Data Parameters and Validation. Getting empirical data and validating models that may simulate unobservable associations is challenging.
- Arbitrariness and Parameterization. The many parameters that need to be set can lead to a high degree of arbitrariness.
- Behavior modeling. There are endless possibilities to design plausible behavioral rules. A sensitivity analysis is difficult.
Up next
- Reinforcement learning as a principled model of behavior, countering some of the arbitrariness in parameterizing behavioral rules.
- Synthesis: Collective reinforcement learning dynamics to counter performance limitations and a lack of transparency and reproducibility.
9.6 Learning goals revisited
In this chapter,
we covered the history and rationale of agent-based modeling: generative social science.
We covered the advantages (flexibility and expressiveness) and challenges (transparency, arbitrariness, performance) of agent-based modeling.
We implemented and simulated two famous agent-based models in Python: Conway’s Game of Life and Schelling’s segregation model.
We implemented animations in Python using matplotlib.animation.FuncAnimation.
We wrote Schelling’s segregation model as an object-oriented program.