Supplementary Information for Collective Cooperative Intelligence

Author: Wolfram Barfuss et al.

Published: June 24, 2024

1 Introduction

Collective cooperation – in which intelligent actors in complex environments seek ways to improve their joint well-being – is critical for a sustainable future, yet remains unresolved. Mathematical models are essential for moving forward with this challenge. Our perspective paper argues that building bridges between complex systems science (CSS) and multi-agent reinforcement learning (MARL) offers a more robust understanding of the drivers, mechanisms, and dynamics of collective cooperation among intelligent actors in dynamic environments. The two fields complement each other in their goals, methods, and scope.

This supplementary information presents a more detailed background on the literature (Chapter 2). Furthermore, we give all the details regarding the collective reinforcement learning dynamics we employ (Chapter 3) and show how we apply them to create all complex phenomena presented in the main text (Chapters 4–7). Chapter 8 contains all required simulation scripts.

Reproducibility

This supplementary information was created in a fully reproducible writing and computing environment with the help of nbdev and quarto. If you are reading the PDF or web version of this document, you can find the source code in the form of Jupyter notebooks at https://github.com/wbarfuss/collective-cooperative-intelligence.

To reproduce all simulations, create a new conda environment with the provided pythonenvironment.yml file.

conda env create -f pythonenvironment.yml

This also installs the Collective Reinforcement Learning Dynamics in Python, which are provided by a separate Python package that is in an early stage of development.

Activate the environment with:

conda activate cocoin

Afterwards, you should be able to follow along and execute all of the notebooks.

If you have any feedback, questions, or problems with the code, please do not hesitate to open a GitHub issue here: https://github.com/wbarfuss/collective-cooperative-intelligence/issues.