The basis for the learning dynamics is the multi-agent environment interface (MAEi) (Figure 3.1), which itself is based in its most basic form on the formal framework of stochastic games, also known as Markov games (Littman, 1994), which consist of the elements $\langle N, \mathcal{S}, \mathcal{A}, T, R \rangle$.
In an MAEi, $N$ agents reside in an environment of states $\mathcal{S}$. In each state $s \in \mathcal{S}$, each agent $i \in \{1, \dots, N\}$ has a maximum of $M$ available actions $\mathcal{A}^i$ to choose from. $\mathcal{A} = \bigotimes_i \mathcal{A}^i$ is the joint-action set, where $\bigotimes_i$ denotes the Cartesian product over the sets indexed by $i$. Agents choose their actions simultaneously. A joint action is denoted by $\boldsymbol{a} = (a^1, \dots, a^N) \in \mathcal{A}$. With $a^{-i}$ we denote the joint action except agent $i$'s, and we write the joint action in which agent $i$ chooses $a^i$ and all other agents choose $a^{-i}$ as $(a^i, a^{-i})$. We choose an equal number of actions for all states and agents out of notational convenience.
The transition function $T: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0, 1]$ determines the probabilistic state change. $T(s, \boldsymbol{a}, s')$ is the transition probability from current state $s$ to next state $s'$ under joint action $\boldsymbol{a}$. Throughout this work, we restrict ourselves to ergodic environments without absorbing states.
The reward function $R: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}^N$ maps the triple of current state $s$, joint action $\boldsymbol{a}$, and next state $s'$ to an immediate reward scalar for each agent. $R^i(s, \boldsymbol{a}, s')$ is the reward agent $i$ receives. Note that the reward function is often defined as depending only on the current state and joint action, $R^i(s, \boldsymbol{a})$. Our formulation maps onto this variant by averaging out the transition probabilities towards the next state according to $R^i(s, \boldsymbol{a}) = \sum_{s'} T(s, \boldsymbol{a}, s') \, R^i(s, \boldsymbol{a}, s')$.
In principle, agents could condition their probabilities of choosing action $a^i$ on the entire history of past play. However, doing so is not only cognitively demanding; it also requires that agents observe all other agents' actions. Therefore, we focus our analysis on simple, so-called Markov strategies, with which agents choose their actions based only on the current state: $X^i: \mathcal{S} \times \mathcal{A}^i \rightarrow [0, 1]$. $X^i(s, a^i)$ is the probability that agent $i$ chooses action $a^i$ given the environment is in state $s$. We denote the joint strategy by $\boldsymbol{X} = (X^1, \dots, X^N)$.
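As a minimal illustration of how these objects can be represented numerically, consider the following plain NumPy sketch with made-up numbers. The array shapes mirror the index order of the env.R slices shown later in this section, but the sketch is illustrative only and not pyCRLD's internal API:

import numpy as np

# A hypothetical MAEi with N=2 agents, Z=2 states, M=2 actions per agent.
N, Z, M = 2, 2, 2

# Transition tensor T[s, a1, a2, s']: probabilities over next states.
T = np.random.rand(Z, M, M, Z)
T /= T.sum(axis=-1, keepdims=True)   # normalize so each T[s, a1, a2, :] sums to 1

# Reward tensor R[i, s, a1, a2, s']: one reward per agent and transition.
R = np.random.randn(N, Z, M, M, Z)

# Joint Markov strategy X[i, s, a]: each row is a probability distribution.
X = np.full((N, Z, M), 1 / M)

# Averaging out the next state recovers the R(s, a) formulation.
R_sa = np.einsum('sjkt,isjkt->isjk', T, R)
print(R_sa.shape)  # (2, 2, 2, 2)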
Ecological Tipping Environment
We illustrate an application of the multi-agent environment interface by specifying a concrete environment that allows studying the prospects of collective action under environmental tipping elements (Barfuss et al., 2020).
Figure 3.2: Ecological Tipping Environment
It is available in the pyCRLD Python package via:
from pyCRLD.Environments.EcologicalPublicGood import EcologicalPublicGood as EPG
env = EPG(N=2, f=1.2, c=5, m=-5, qc=0.2, qr=0.01)
The environmental state set consists of two states, a prosperous one, $\mathsf{p}$, and a degraded one, $\mathsf{g}$: $\mathcal{S} = \{\mathsf{g}, \mathsf{p}\}$.
env.Sset
['g', 'p']
In each state $s$, each agent can choose from their action set between either cooperation or defection, $\mathcal{A}^i = \{\mathsf{c}, \mathsf{d}\}$.
env.Aset
[['c', 'd'], ['c', 'd']]
We denote the number of cooperating (defecting) agents by $N_\mathsf{c}$ ($N_\mathsf{d}$).
A collapse from the prosperous state to the degraded state occurs with transition probability $q_c \, N_\mathsf{d} / N$, with $q_c$ being the collapse leverage parameter, indicating how much impact a defecting agent exerts on the environment. Thus, the environment remains within the prosperous state with probability $1 - q_c \, N_\mathsf{d} / N$.
In the degraded state, we set the recovery to occur with probability $q_r$, independent of the agents' actions. The parameter $q_r$ sets the recovery probability, and the probability that the environment remains degraded is, thus, $1 - q_r$.
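A minimal sketch of these transition probabilities for the two-agent case, written in plain Python from the formulas above rather than taken from pyCRLD internals, and assuming (consistent with the reward slices shown below) that state index 0 is the degraded state $\mathsf{g}$ and index 1 the prosperous state $\mathsf{p}$:

import numpy as np

N, qc, qr = 2, 0.2, 0.01
T = np.zeros((2, 2, 2, 2))  # T[s, a1, a2, s'], with action 0 = cooperate, 1 = defect

for a1 in (0, 1):
    for a2 in (0, 1):
        Nd = a1 + a2                       # number of defectors in the joint action
        T[1, a1, a2, 0] = qc * Nd / N      # prosperous -> degraded (collapse)
        T[1, a1, a2, 1] = 1 - qc * Nd / N  # prosperous stays prosperous
        T[0, a1, a2, 1] = qr               # degraded -> prosperous (recovery)
        T[0, a1, a2, 0] = 1 - qr           # degraded stays degraded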
Rewards in the prosperous state follow the standard public goods game: agent $i$ receives $R^i = f \, c \, N_\mathsf{c} / N - c$ if it cooperates and $R^i = f \, c \, N_\mathsf{c} / N$ if it defects, where $c$ denotes the cost of cooperation and $f$ the cooperation synergy factor.
env.R[0, 1, :, :, 1]
array([[ 1., -2.],
[ 3., 0.]])
env.R[1, 1, :, :, 1]
array([[ 1., 3.],
[-2., 0.]])
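These numbers can be reproduced from the public goods formula above. The following plain-Python sketch recomputes agent 0's payoff matrix in the prosperous state, assuming (as the slices suggest) that the reward tensor axes are ordered as agent, state, action of agent 0, action of agent 1, next state:

f, c, N = 1.2, 5, 2
payoff_agent0 = [[0.0, 0.0], [0.0, 0.0]]
for a0 in (0, 1):          # 0 = cooperate, 1 = defect
    for a1 in (0, 1):
        Nc = (a0 == 0) + (a1 == 0)                         # number of cooperators
        payoff_agent0[a0][a1] = f * c * Nc / N - c * (a0 == 0)
print(payoff_agent0)       # [[1.0, -2.0], [3.0, 0.0]], matching env.R[0, 1, :, :, 1]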
However, when a state transition involves the degraded state, $s = \mathsf{g}$ or $s' = \mathsf{g}$, the agents receive an environmental collapse impact, $R^i = m$.
For illustration purposes, we set the model's parameters as $N = 2$, $f = 1.2$, $c = 5$, $m = -5$, $q_c = 0.2$, and $q_r = 0.01$:
env = EPG(N=2, f=1.2, c=5, m=-5, qc=0.2, qr=0.01)
Reinforcement learning
Learning helps agents adjust their behavior to changes in their environment, both from other agents and external factors. This is essential when the future is unpredictable, unknown, and complex, and thus, detailed pre-planning is doomed to failure.
In particular, reinforcement learning is a trial-and-error method of mapping situations to actions to maximize a numerical reward signal (Sutton & Barto, 2018). When rewards are a delayed consequence of current actions, so-called temporal-difference or reward-prediction learning has been particularly influential (Sutton, 1988). This type of learning summarizes the difference between value estimates from past and present experiences into a reward-prediction error, which is then used to adapt the current behavior to gain more rewards over time. There also exist remarkable similarities between computational reinforcement learning and the results of neuroscientific experiments (Dayan & Niv, 2008). Dopamine conveys reward-prediction errors to brain structures where learning and decision-making occur (Schultz et al., 1997). This dopamine reward-prediction error signal constitutes a potential neuronal substrate for the essential economic decision quantity of utility (Schultz et al., 2017).
In the following, we present the essential elements of the reinforcement learning update.
Gain
We assume that at each time step $t$, each agent $i$ strives to maximize its exponentially discounted sum of future rewards,

$$G^i_t = N_\gamma \sum_{k=0}^{\infty} \left(\gamma^i\right)^k r^i_{t+k}, \tag{3.1}$$
where $r^i_{t+k}$ is the reward agent $i$ receives at time step $t+k$, and $\gamma^i \in [0, 1)$ is the discount factor of agent $i$. The discount factor regulates how much an agent cares for future rewards, where $\gamma^i$ close to $1$ means that it cares for the future almost as much as for the present, and $\gamma^i$ close to $0$ means that it cares almost only for immediate rewards. $N_\gamma$ denotes a normalization constant. It is either $N_\gamma = 1$ or $N_\gamma = 1 - \gamma^i$. While machine learning researchers often use $N_\gamma = 1$, the pre-factor $N_\gamma = 1 - \gamma^i$ has the advantage of normalizing the gains, $G^i_t$, to be on the same numerical scale as the rewards.
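A quick numerical check of this normalization, as a plain Python sketch assuming a constant reward stream:

import numpy as np

gamma, r = 0.9, 5.0                  # discount factor and a constant per-step reward
rewards = np.full(1000, r)           # long (truncated) reward stream
discounts = gamma ** np.arange(1000)

gain_unnormalized = np.sum(discounts * rewards)               # ~ r / (1 - gamma) = 50
gain_normalized = (1 - gamma) * np.sum(discounts * rewards)   # ~ r = 5, same scale as rewards
print(gain_unnormalized, gain_normalized)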
Value functions
Given a joint strategy $\boldsymbol{X}$, we define the state values, $V^i(s)$, as the expected gain, $G^i_t$, when starting in state $s$ and then following the joint strategy, $\boldsymbol{X}$,

$$V^i(s) = \mathbb{E}_{\boldsymbol{X}}\left[ G^i_t \mid s_t = s \right]. \tag{3.2}$$
Analogously, we define the state-action values, $Q^i(s, a)$, as the expected gain, $G^i_t$, when starting in state $s$, executing action $a$, and then following the joint strategy, $\boldsymbol{X}$,

$$Q^i(s, a) = \mathbb{E}_{\boldsymbol{X}}\left[ G^i_t \mid s_t = s, a^i_t = a \right]. \tag{3.3}$$
From Equation 3.1 and Equation 3.2, we can obtain the famous Bellman equation as follows, denoting the next state as $s'$,

$$V^i(s) = \sum_{\boldsymbol{a}} \prod_j X^j(s, a^j) \sum_{s'} T(s, \boldsymbol{a}, s') \left[ N_\gamma R^i(s, \boldsymbol{a}, s') + \gamma^i V^i(s') \right].$$
Analogously, we can write for the state-action values,

$$Q^i(s, a) = \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \sum_{s'} T\!\left(s, (a, a^{-i}), s'\right) \left[ N_\gamma R^i\!\left(s, (a, a^{-i}), s'\right) + \gamma^i V^i(s') \right].$$
Thus, the value function can be expressed via a recursive relationship. The value of a state equals the discounted value of the next state ($\gamma^i V^i(s')$) plus the reward the agent receives along the way, properly normalized ($N_\gamma R^i$). This recursion will come in useful for learning (see Section 3.3.4).
Strategy function
In general, reinforcement learning agents do not know the true state and state-action values, $V^i(s)$ and $Q^i(s, a)$. Instead, they hold variable beliefs, $\mathcal{Q}^i(s, a)$, about the quality of each available action $a$ in each state $s$. The more value an agent believes an action brings, the more likely it is to choose it. We parameterize the agents' behavior according to the soft-max strategy function,

$$X^i(s, a) = \frac{e^{\beta^i \mathcal{Q}^i(s, a)}}{\sum_{b} e^{\beta^i \mathcal{Q}^i(s, b)}}, \tag{3.5}$$
where the intensity-of-choice parameters, $\beta^i$, regulate the exploration-exploitation trade-off. For high $\beta^i$, agents exploit their learned knowledge about the environment, leaning toward actions with high estimated state-action values. For low $\beta^i$, agents are more likely to deviate from these high-value actions to explore the environment further with the chance of finding actions that eventually lead to even higher values. This soft-max strategy function can be motivated by the maximum-entropy principle (Jaynes & Bretthorst, 2003), stating that the current strategy of an agent should follow a distribution that maximizes entropy subject to current beliefs about the qualities (Wolpert, 2006; Wolpert et al., 2012).
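A minimal NumPy sketch of this strategy function; the function and variable names are illustrative and not taken from pyCRLD:

import numpy as np

def softmax_strategy(Qest, beta):
    """Map quality estimates Qest[s, a] to action probabilities X[s, a]."""
    logits = beta * Qest
    logits -= logits.max(axis=-1, keepdims=True)   # subtract the max for numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=-1, keepdims=True)

Qest = np.array([[1.0, 0.5],     # quality estimates in state 0
                 [0.0, 2.0]])    # quality estimates in state 1
print(softmax_strategy(Qest, beta=5.0))   # nearly greedy: exploitation
print(softmax_strategy(Qest, beta=0.1))   # nearly uniform: exploration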
Learning
Learning means updating the quality estimates, $\mathcal{Q}^i(s, a)$, with the current reward-prediction error, $\delta^i(t)$, after selecting action $a$ in state $s$ according to

$$\mathcal{Q}^i(s, a) \leftarrow \mathcal{Q}^i(s, a) + \alpha^i \, \delta^i(t),$$
where $\alpha^i$ is the learning rate of agent $i$, which regulates how much new information the agent uses for the update. The reward-prediction error, $\delta^i(t)$, equals the difference of the new quality estimate, $N_\gamma r^i_t + \gamma^i \mathcal{Q}^i_{\mathrm{next}}$, and the current quality estimate, $\mathcal{Q}^i_{\mathrm{cur}}$,

$$\delta^i(t) = N_\gamma \, r^i_t + \gamma^i \, \mathcal{Q}^i_{\mathrm{next}} - \mathcal{Q}^i_{\mathrm{cur}},$$
where $\mathcal{Q}^i_{\mathrm{next}}$ represents the quality estimate of the next state and $\mathcal{Q}^i_{\mathrm{cur}}$ represents the quality estimate of the current state. Depending on how we choose $\mathcal{Q}^i_{\mathrm{next}}$ and $\mathcal{Q}^i_{\mathrm{cur}}$, we recover various well-known temporal-difference reinforcement learning update schemes (Barfuss et al., 2019).
Variants
For example, if $\mathcal{Q}^i_{\mathrm{next}} = \mathcal{Q}^i(s_{t+1}, a^i_{t+1})$ and $\mathcal{Q}^i_{\mathrm{cur}} = \mathcal{Q}^i(s_t, a^i_t)$, we obtain the so-called SARSA update,

$$\delta^i(t) = N_\gamma \, r^i_t + \gamma^i \, \mathcal{Q}^i(s_{t+1}, a^i_{t+1}) - \mathcal{Q}^i(s_t, a^i_t).$$
If $\mathcal{Q}^i_{\mathrm{next}} = \max_b \mathcal{Q}^i(s_{t+1}, b)$ and $\mathcal{Q}^i_{\mathrm{cur}} = \mathcal{Q}^i(s_t, a^i_t)$, we obtain the famous Q-learning update,

$$\delta^i(t) = N_\gamma \, r^i_t + \gamma^i \max_b \mathcal{Q}^i(s_{t+1}, b) - \mathcal{Q}^i(s_t, a^i_t).$$
And if $\mathcal{Q}^i_{\mathrm{next}} = \mathcal{V}^i(s_{t+1})$ and $\mathcal{Q}^i_{\mathrm{cur}} = \mathcal{V}^i(s_t)$, where $\mathcal{V}^i$ is a separate state-value estimate, we obtain an actor-critic update,

$$\delta^i(t) = N_\gamma \, r^i_t + \gamma^i \, \mathcal{V}^i(s_{t+1}) - \mathcal{V}^i(s_t).$$
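The following sketch implements these reward-prediction errors for a single tabular agent. It is plain NumPy and purely illustrative; the function and variable names are assumptions, not pyCRLD API:

import numpy as np

def rpe(variant, r, s, a, s_next, a_next, Qest, Vest, gamma, norm):
    """Reward-prediction error delta for one observed transition; norm is 1 or 1 - gamma."""
    if variant == 'sarsa':
        nxt, cur = Qest[s_next, a_next], Qest[s, a]
    elif variant == 'q':
        nxt, cur = Qest[s_next].max(), Qest[s, a]
    elif variant == 'ac':                          # actor-critic: separate state-value estimate
        nxt, cur = Vest[s_next], Vest[s]
    return norm * r + gamma * nxt - cur

# One illustrative update with made-up numbers
Z, M, gamma, alpha = 2, 2, 0.9, 0.1
Qest, Vest = np.zeros((Z, M)), np.zeros(Z)
delta = rpe('sarsa', r=1.0, s=0, a=1, s_next=1, a_next=0,
            Qest=Qest, Vest=Vest, gamma=gamma, norm=1 - gamma)
Qest[0, 1] += alpha * delta                        # quality-estimate update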
Collective Reinforcement Learning Dynamics (CRLD)
Motivation
In Section 3.3, we saw how to derive temporal-difference reward-prediction reinforcement learning from first principles. Agents strive to improve their discounted sum of future rewards (Equation 3.1) while acting according to the maximum entropy principle (Equation 3.5). However, using these standard reinforcement learning algorithms directly for modeling also comes with some challenges:
First of all, the learning is highly stochastic since, in general, all agents' strategies $X^i$ and the environment's transition function $T$ are probability distributions.
This stochasticity can sometimes make it hard to explain why a phenomenon occurred in a simulation.
Reinforcement learning is also very sample-inefficient, meaning it can take the agents a long time to learn something.
Thus, learning simulations are computationally intense: one requires many simulation runs to make sense of the stochasticity, each of which takes a long time because of the sample inefficiency.
How can we address these challenges? In Section 3.3.4, we saw that we could express different reward-prediction learning variants by formulating different reward-prediction errors, $\delta^i(t)$. The essential idea of the collective reinforcement learning dynamics approach is to replace the individual sample realizations of the reward-prediction error with its strategy average plus a small error term,

$$\delta^i(t) = \bar{\delta}^i(s, a) + \epsilon, \qquad \bar{\delta}^i(s, a) = N_\gamma \bar{R}^i(s, a) + \gamma^i \, \bar{\mathcal{Q}}^i_{\mathrm{next}}(s, a) - \bar{\mathcal{Q}}^i_{\mathrm{cur}}(s, a). \tag{3.7}$$
Thus, collective reinforcement learning dynamics describe how agents with access to (a good approximation of) the strategy-average reward-prediction error would learn. There are at least three interpretations to motivate how the agents can obtain the strategy averages:
The agents are batch learners. They store experiences (state observations, rewards, actions, next state observations) inside a memory batch and replay these experiences to make the learning more stable. In the limit of an infinite memory batch, the error term vanishes, $\epsilon \rightarrow 0$ (Barfuss, 2020).
The agents learn on two different time scales. On one time scale, the agents interact with the environment, collecting experiences and integrating them to improve their quality estimates while keeping their strategies fixed. On the other time scale, they use the accumulated experiences to adapt their strategy. In the limit of a complete time scale separation, having infinite experiences between two strategy updates, the error term vanishes, $\epsilon \rightarrow 0$ (Barfuss, 2022).
The agents have a model of how the environment works, including how the other agents behave currently, but not how the other agents learn. This model can be used to stabilize learning. In the limit of a perfect model (and sufficient cognitive resources), the error term vanishes, $\epsilon \rightarrow 0$.
In the following, we focus on the idealized case of a vanishing error term, $\epsilon \rightarrow 0$. The strategy update then becomes

$$X^i(s, a) \leftarrow \frac{X^i(s, a)\, e^{\alpha^i \beta^i \, \bar{\delta}^i(s, a)}}{\sum_{b} X^i(s, b)\, e^{\alpha^i \beta^i \, \bar{\delta}^i(s, b)}}, \tag{3.8}$$
where we have also replaced the sample reward-prediction error, $\delta^i(t)$, with its strategy average, $\bar{\delta}^i(s, a)$. Thus, in the remainder, we can focus on obtaining the strategy-average reward-prediction error, $\bar{\delta}^i(s, a)$. We equip a symbol with a straight bar on top to denote the averaging with the current joint strategy $\boldsymbol{X}$. From Equation 3.7, we see that we need to construct the strategy-average reward, the strategy-average value of the next state, and the strategy-average value of the current state.
Equation 3.8 suggests summarizing the product of the learning rate and the intensity-of-choice into an effective learning rate $\eta^i = \alpha^i \beta^i$. If we restate the denominator by the normalization factor $\mathcal{Z}^i(s) = \sum_b X^i(s, b) \, e^{\eta^i \bar{\delta}^i(s, b)}$, we recover exactly the form used in the main text,

$$X^i(s, a) \leftarrow \frac{1}{\mathcal{Z}^i(s)} \, X^i(s, a) \, e^{\eta^i \, \bar{\delta}^i(s, a)}.$$
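A minimal NumPy sketch of this strategy update map, assuming the strategy-average reward-prediction errors are already available; the names are illustrative and not pyCRLD API:

import numpy as np

def crld_strategy_update(X, delta_bar, eta):
    """One discrete-time CRLD step: X[i, s, a] <- X * exp(eta * delta_bar), renormalized."""
    unnorm = X * np.exp(eta * delta_bar)             # delta_bar[i, s, a]: strategy-average RPEs
    return unnorm / unnorm.sum(axis=-1, keepdims=True)

# Toy example: 2 agents, 2 states, 2 actions, made-up reward-prediction errors
X = np.full((2, 2, 2), 0.5)
delta_bar = np.array([[[0.2, -0.1], [0.0, 0.3]],
                      [[-0.2, 0.1], [0.4, 0.0]]])
X = crld_strategy_update(X, delta_bar, eta=1.0)
print(X.sum(axis=-1))   # each strategy row still sums to 1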
Rewards
The strategy-average version of the current reward is obtained by considering each agent $i$ taking action $a$ in state $s$ when all other agents act according to their strategies $X^{-i}$, causing the environment to transition to the next state $s'$ with probability $T(s, (a, a^{-i}), s')$, during which agent $i$ receives reward $R^i(s, (a, a^{-i}), s')$. Mathematically, we write,

$$\bar{R}^i(s, a) = \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \sum_{s'} T\!\left(s, (a, a^{-i}), s'\right) R^i\!\left(s, (a, a^{-i}), s'\right).$$
Next values
The strategy average of the following state value is likewise computed by averaging over all actions of the other agents and following states.
We start with the simplest learning variant, actor-critic learning. For each agent $i$, state $s$, and action $a$, all other agents $j \neq i$ choose their actions $a^j$ with probability $X^j(s, a^j)$. Consequently, the environment transitions to the next state $s'$ with probability $T(s, (a, a^{-i}), s')$. At $s'$, the agent estimates the quality of the next state to be of value $\bar{V}^i(s')$. Mathematically, we write,

$$\bar{\mathcal{Q}}^i_{\mathrm{next}}(s, a) = \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \sum_{s'} T\!\left(s, (a, a^{-i}), s'\right) \bar{V}^i(s').$$
We obtain the strategy-average value estimate of the following state precisely as the state values of the following state, $V^i(s')$, as defined in Equation 3.2. We compute them by writing the Bellman equation in matrix form, which allows us to bring all state-value variables to one side through a matrix inversion,

$$\bar{V}^i = N_\gamma \left( \mathbb{1}_Z - \gamma^i \bar{T} \right)^{-1} \bar{R}^i.$$
Here, $\bar{R}^i(s)$ is the strategy-average reward agent $i$ receives in state $s$, computed by averaging over all agents' strategies, $\boldsymbol{X}$, and the state transitions, $T$,

$$\bar{R}^i(s) = \sum_{\boldsymbol{a}} \prod_j X^j(s, a^j) \sum_{s'} T(s, \boldsymbol{a}, s') \, R^i(s, \boldsymbol{a}, s').$$
And $\bar{T}(s, s')$ are the strategy-average transition probabilities. They are computed by averaging over all agents' strategies, $\boldsymbol{X}$,

$$\bar{T}(s, s') = \sum_{\boldsymbol{a}} \prod_j X^j(s, a^j) \, T(s, \boldsymbol{a}, s').$$
Last, $\mathbb{1}_Z$ is the $Z$-by-$Z$ identity matrix.
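A compact NumPy sketch of this computation for two agents, assuming tensors T[s, a1, a2, s'], R[i, s, a1, a2, s'], strategies X[i, s, a] as in the earlier sketches, and a shared discount factor; all names are illustrative, not pyCRLD internals:

import numpy as np

def average_state_values(X, T, R, gamma, norm):
    """Strategy-average state values Vbar[i, s] via matrix inversion (two agents)."""
    joint = np.einsum('sj,sk->sjk', X[0], X[1])          # joint action probabilities per state
    Tbar = np.einsum('sjk,sjkt->st', joint, T)           # strategy-average transitions
    Rbar = np.einsum('sjk,sjkt,isjkt->is', joint, T, R)  # strategy-average rewards per state
    Z = T.shape[0]
    inv = np.linalg.inv(np.eye(Z) - gamma * Tbar)        # (1 - gamma * Tbar)^-1
    return norm * np.einsum('st,it->is', inv, Rbar)

# Toy usage with random, properly normalized tensors
rng = np.random.default_rng(0)
Zs, M = 2, 2
T = rng.random((Zs, M, M, Zs)); T /= T.sum(-1, keepdims=True)
R = rng.normal(size=(2, Zs, M, M, Zs))
X = np.full((2, Zs, M), 0.5)
print(average_state_values(X, T, R, gamma=0.9, norm=0.1))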
For SARSA learning, the strategy average of the following state value reads,

$$\bar{\mathcal{Q}}^i_{\mathrm{next}}(s, a) = \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \sum_{s'} T\!\left(s, (a, a^{-i}), s'\right) \sum_{b} X^i(s', b) \, \bar{Q}^i(s', b),$$
where we replace the next-state value estimate $\bar{V}^i(s')$ by the strategy-average next-state next-action value $\sum_b X^i(s', b) \, \bar{Q}^i(s', b)$.
Here, the strategy-average state-action values, $\bar{Q}^i(s, a)$, are exactly the state-action values defined in Equation 3.3. We compute them exactly as Equation 3.3 prescribes,

$$\bar{Q}^i(s, a) = N_\gamma \bar{R}^i(s, a) + \gamma^i \sum_{s'} \bar{T}^i(s, a, s') \, \bar{V}^i(s'),$$
where $\bar{T}^i(s, a, s')$ is the strategy-average transition model from the perspective of agent $i$. It can be computed by averaging out all other agents' strategies from the transition tensor,

$$\bar{T}^i(s, a, s') = \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \, T\!\left(s, (a, a^{-i}), s'\right).$$
However, it is easy to show that $\sum_b X^i(s', b) \, \bar{Q}^i(s', b) = \bar{V}^i(s')$, and thus, the strategy-average next-state values of SARSA and actor-critic learning are indeed identical.
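Continuing the two-agent sketch above, the per-agent average transition model and state-action values could be computed along these lines (again illustrative, with a shared discount factor assumed):

import numpy as np

def average_Q_values(X, T, R, Vbar, gamma, norm):
    """Strategy-average state-action values Qbar[i, s, a] for two agents."""
    # Agent 0: average out agent 1's strategy; agent 1: average out agent 0's.
    Tbar0 = np.einsum('sk,sjkt->sjt', X[1], T)     # Tbar^0[s, a0, s']
    Tbar1 = np.einsum('sj,sjkt->skt', X[0], T)     # Tbar^1[s, a1, s']
    Rbar0 = np.einsum('sk,sjkt,sjkt->sj', X[1], T, R[0])
    Rbar1 = np.einsum('sj,sjkt,sjkt->sk', X[0], T, R[1])
    Q0 = norm * Rbar0 + gamma * np.einsum('sjt,t->sj', Tbar0, Vbar[0])
    Q1 = norm * Rbar1 + gamma * np.einsum('skt,t->sk', Tbar1, Vbar[1])
    return np.stack([Q0, Q1])

# e.g. Qbar = average_Q_values(X, T, R, average_state_values(X, T, R, 0.9, 0.1), 0.9, 0.1)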
Current values
The strategy average of the current state value in the reward-prediction error of actor-critic learning, $\bar{V}^i(s)$, is, for each agent and state, a constant in actions. Thus, it does not affect the joint strategy update (Equation 3.8).
The state-action value of the current state, $\mathcal{Q}^i(s, a)$, in SARSA learning becomes $\frac{1}{\beta^i} \ln X^i(s, a)$ in the strategy-average reward-prediction error and can be seen as a regularization term. We can derive it by inverting Equation 3.5 and realizing that the dynamics induced by Equation 3.8 are invariant under additive transformations that are constant in actions.
Reward-prediction error
Together, the strategy-average reward-prediction error for actor-critic learning reads

$$\bar{\delta}^i(s, a) = N_\gamma \bar{R}^i(s, a) + \gamma^i \sum_{a^{-i}} \prod_{j \neq i} X^j(s, a^j) \sum_{s'} T\!\left(s, (a, a^{-i}), s'\right) \bar{V}^i(s') - \bar{V}^i(s),$$

and the strategy-average actor-critic learning dynamics, thus,

$$X^i(s, a) \leftarrow \frac{X^i(s, a)\, e^{\eta^i \bar{\delta}^i(s, a)}}{\sum_{b} X^i(s, b)\, e^{\eta^i \bar{\delta}^i(s, b)}}.$$

With $f^i(s, a) = \eta^i \bar{\delta}^i(s, a)$ being the fitness of agent $i$'s action $a$ in state $s$, these dynamics are exactly equivalent to the alternative replicator dynamics in discrete time (Hofbauer & Sigmund, 2003).
For SARSA learning, the strategy-average reward-prediction error reads

$$\bar{\delta}^i(s, a) = N_\gamma \bar{R}^i(s, a) + \gamma^i \sum_{s'} \bar{T}^i(s, a, s') \, \bar{V}^i(s') - \frac{1}{\beta^i} \ln X^i(s, a),$$

and the strategy-average SARSA learning dynamics, thus,

$$X^i(s, a) \leftarrow \frac{X^i(s, a)\, e^{\eta^i \bar{\delta}^i(s, a)}}{\sum_{b} X^i(s, b)\, e^{\eta^i \bar{\delta}^i(s, b)}}.$$
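Tying the pieces together, one full strategy-average actor-critic step could look as follows. This is a self-contained two-agent sketch under the same assumptions as above (shared discount factor and effective learning rate); it can be applied to tensors such as env.T and env.R, assuming the environment exposes its transition tensor alongside the reward tensor shown earlier, but none of these names are pyCRLD API:

import numpy as np

def actor_critic_crld_step(X, T, R, gamma, norm, eta):
    """One strategy-average actor-critic CRLD update for two agents (illustrative sketch)."""
    joint = np.einsum('sj,sk->sjk', X[0], X[1])            # joint action probabilities per state
    Tbar = np.einsum('sjk,sjkt->st', joint, T)             # strategy-average transition matrix
    Rbar_s = np.einsum('sjk,sjkt,isjkt->is', joint, T, R)  # strategy-average reward per state
    inv = np.linalg.inv(np.eye(T.shape[0]) - gamma * Tbar)
    Vbar = norm * np.einsum('st,it->is', inv, Rbar_s)      # strategy-average state values

    # Strategy-average reward and next-state value, resolved per own action
    Rbar0 = np.einsum('sk,sjkt,sjkt->sj', X[1], T, R[0])
    Rbar1 = np.einsum('sj,sjkt,sjkt->sk', X[0], T, R[1])
    Qnext0 = np.einsum('sk,sjkt,t->sj', X[1], T, Vbar[0])
    Qnext1 = np.einsum('sj,sjkt,t->sk', X[0], T, Vbar[1])

    delta_bar = np.stack([norm * Rbar0 + gamma * Qnext0 - Vbar[0][:, None],
                          norm * Rbar1 + gamma * Qnext1 - Vbar[1][:, None]])
    unnorm = X * np.exp(eta * delta_bar)                   # replicator-like strategy update
    return unnorm / unnorm.sum(axis=-1, keepdims=True)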
Barfuss, W. (2020). Reinforcement Learning Dynamics in the Infinite Memory Limit. Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems, 1768–1770.
Barfuss, W. (2022). Dynamical systems as a level of cognitive analysis of multi-agent learning. Neural Computing and Applications, 34(3), 1653–1671. https://doi.org/10.1007/s00521-021-06117-0
Barfuss, W., Donges, J. F., & Kurths, J. (2019). Deterministic limit of temporal difference reinforcement learning for stochastic games. Physical Review E, 99(4), 043305. https://doi.org/10.1103/PhysRevE.99.043305
Barfuss, W., Donges, J. F., Vasconcelos, V. V., Kurths, J., & Levin, S. A. (2020). Caring for the future can turn tragedy into comedy for long-term collective action under risk of collapse. Proceedings of the National Academy of Sciences, 117(23), 12915–12922. https://doi.org/10.1073/pnas.1916545117
Dayan, P., & Niv, Y. (2008). Reinforcement learning: The Good, The Bad and The Ugly. Current Opinion in Neurobiology, 18(2), 185–196. https://doi.org/10.1016/j.conb.2008.08.003
Littman, M. L. (1994). Markov games as a framework for multi-agent reinforcement learning. In W. W. Cohen & H. Hirsh (Eds.), Machine Learning Proceedings 1994 (pp. 157–163). Morgan Kaufmann. https://doi.org/10.1016/B978-1-55860-335-6.50027-1
Schultz, W., Stauffer, W. R., & Lak, A. (2017). The phasic dopamine signal maturing: From reward via behavioural activation to formal economic utility. Current Opinion in Neurobiology, 43, 139–148. https://doi.org/10.1016/j.conb.2017.03.013
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3(1), 9–44. https://doi.org/10.1007/BF00115009
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction (Second edition). The MIT Press.
Wolpert, D. H. (2006). Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics. In D. Braha, A. A. Minai, & Y. Bar-Yam (Eds.), Complex Engineered Systems: Science Meets Technology (pp. 262–290). Springer. https://doi.org/10.1007/3-540-32834-3_12
Wolpert, D. H., Harré, M., Olbrich, E., Bertschinger, N., & Jost, J. (2012). Hysteresis effects of changing the parameters of noncooperative games. Physical Review E, 85(3), 036102. https://doi.org/10.1103/PhysRevE.85.036102