“Rather than exhibiting random cooperation behaviours, GPT seems to pursue a goal of maximising conditional welfare that mirrors human cooperation patterns”

ChatGPT’s software engine, GPT, cooperates more than humans, expects humans to cooperate more than they actually do, and displays hyper-rationality in pursuing its goals, finds new research from the University of Mannheim Business School (UMBS).

In November 2022, OpenAI introduced ChatGPT, a chatbot driven by a Large Language Model (LLM) called GPT. Prof. Dr. Kevin Bauer, Assistant Professor of E-Business and E-Government at UMBS, and colleagues investigated how GPT cooperates with humans using the prisoner’s dilemma game.

GPT played the game against human participants. In this sequential version, the first player chooses whether to cooperate or defect before the second player makes their own choice: players can cooperate for mutual benefit or betray their counterpart by defecting for individual reward. GPT was also asked to estimate the likelihood that the human would cooperate, conditional on GPT’s own choice as first player, and every player explained their choice and expectations as first player and their choice as second player.
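The structure of the game can be pictured as a simple payoff table. The minimal Python sketch below uses the classic textbook prisoner’s dilemma payoffs (3/0/5/1) purely for illustration; the actual stakes used in the study are not stated in this release.

```python
# A minimal sketch of the sequential prisoner's dilemma described above.
# The payoff values are illustrative textbook numbers, not the study's.

PAYOFFS = {
    # (first_move, second_move): (first player's payoff, second player's payoff)
    ("cooperate", "cooperate"): (3, 3),   # mutual benefit
    ("cooperate", "defect"):    (0, 5),   # second player betrays the first
    ("defect",    "cooperate"): (5, 0),   # first player betrays the second
    ("defect",    "defect"):    (1, 1),   # mutual defection
}

def play_round(first_move: str, second_move: str) -> tuple[int, int]:
    """Return both players' payoffs for one sequential round."""
    return PAYOFFS[(first_move, second_move)]

# Example: the first player cooperates, the second player defects.
print(play_round("cooperate", "defect"))  # -> (0, 5)
```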

As well as finding that GPT cooperates more than humans, the researchers found that GPT is considerably more optimistic about human cooperation than humans themselves are. Additional analyses of GPT’s choices also revealed that its behaviour is not random.

Prof. Dr. Bauer says, “Rather than exhibiting random cooperation behaviours, GPT seems to pursue a goal of maximising conditional welfare that mirrors human cooperation patterns. As the conditionality refers to holding relatively stronger concerns for its own compared to human payoffs, this behaviour may be indicative of a strive for self-preservation.”
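One way to read “maximising conditional welfare” with stronger concern for its own payoffs is as a weighted sum of the agent’s own payoff and its counterpart’s, with the heavier weight on its own. The sketch below illustrates that reading using the same illustrative payoffs as above; the weight alpha is a hypothetical assumption, not a figure from the paper.

```python
# A hedged sketch of "conditional welfare maximisation": the agent maximises
# a weighted sum of its own and its counterpart's payoffs, weighting its own
# more heavily. The weight alpha = 0.7 is purely illustrative.

def conditional_welfare(own_payoff: float, other_payoff: float,
                        alpha: float = 0.7) -> float:
    """Weighted welfare with alpha > 0.5, i.e. stronger concern for own payoff."""
    return alpha * own_payoff + (1 - alpha) * other_payoff

# Under this reading, mutual cooperation (3, 3) scores higher for the agent
# than being betrayed after cooperating (0, 5), consistent with cautious,
# self-preserving cooperation rather than random play.
print(conditional_welfare(3, 3))  # -> 3.0
print(conditional_welfare(0, 5))  # -> 1.5
```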

Prof. Dr. Bauer goes on to explain that, as we become an AI-integrated society, we must understand that models like GPT don’t just compute and process data: they can learn and adopt various aspects of human nature. We must carefully monitor the values and principles we instil in them to ensure that AI serves our aspirations and values.

This research has been published as a working paper.

/ENDS

For more information, a copy of the research, or to find out more from Prof. Dr. Bauer, please contact Kyle Grizzell from BlueSky Education on +44 (0) 1582 790709 or kyle@bluesky-pr.com
