A new study by City St George’s, University of London, and the IT University of Copenhagen found that large language model (LLM) agents, when communicating in groups, can spontaneously develop human-like social norms and conventions, much as people do in society. Rather than acting in isolation, the agents coordinated their choices and converged on shared naming conventions through repeated pairwise interactions, even without any global awareness or memory of the group as a whole. The researchers also observed the emergence of collective biases and found that small, committed minorities could shape the behavior of the entire group, a dynamic well documented in human societies. The findings carry important implications for AI safety and for future human-AI interaction.
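
The pairwise-convergence dynamic described here resembles the classic "naming game" from the language-evolution literature. Below is a minimal sketch of that dynamic in Python, assuming a simple success/failure memory-update rule; the agent count, name pool, and round count are illustrative parameters, and the table-lookup agents stand in for the LLM agents the study actually used.

```python
import random

# Hypothetical parameters for illustration, not the study's setup.
N_AGENTS = 50
NAME_POOL = [f"name_{i}" for i in range(10)]
ROUNDS = 20_000

# Each agent starts with an empty memory of candidate names.
memories = [set() for _ in range(N_AGENTS)]

def choose(memory: set) -> str:
    """Pick a remembered name, or sample a fresh one if memory is empty."""
    return random.choice(sorted(memory)) if memory else random.choice(NAME_POOL)

for _ in range(ROUNDS):
    # Agents interact only in random pairs; no one sees the whole group.
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    word = choose(memories[speaker])
    if word in memories[hearer]:
        # Success: both agents prune their memory to the agreed name.
        memories[speaker] = {word}
        memories[hearer] = {word}
    else:
        # Failure: both agents remember the attempted name for later rounds.
        memories[speaker].add(word)
        memories[hearer].add(word)

names_in_use = {name for memory in memories for name in memory}
print(f"Distinct names in circulation after {ROUNDS} rounds: {len(names_in_use)}")
```

Run repeatedly, this toy population almost always collapses onto a single shared name even though no agent ever observes more than its own pairwise history, which is the kind of group-level convention the study reports emerging among LLM agents.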