Let us explore a brand new paper today on multi-user agents in group chats.
Paper: "HUMA: A Humanlike Multi-user Agent for Group Chats"
Idea: Build an AI that can converse in group chats as seamlessly as a human.
Problem
We all know firsthand that group chats tend to be chaotic: overlapping messages, interruptions, emojis, and silent lurkers. Standard AIs are good at 1:1 conversation but start to feel robotic in group chats. The basic giveaways: the AI tends to be too quick, too perfect, and to respond with no latency.
The Solution

HUMA is designed to act as a "community manager". It welcomes users, asks questions, bridges topics, and keeps the vibe positive. It does all this while mimicking human delays, typing indicators, and reactions.
Why HUMA? It adds AI integrations to communities without the telltale flair of automation.
How HUMA is Built
An event-driven system built on LLMs:
Router: Chooses from 20 different strategies, such as asking a question, reacting with an emoji, or staying silent. It scores candidate strategies for fit and variety.
Action Agent: Executes the chosen action. It can send a message, reply, or add a reaction, and it handles interruptions with a "scratchpad" memory.
Reflection: After acting, it reviews everything for consistency.
The system also adds humanlike delays (simulating typing at ~50-100 wpm), typing indicators, and interruptions.
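The route-then-pace loop described above can be sketched in a few lines of Python. Note this is an illustrative sketch, not the paper's implementation: the strategy names, the fit heuristic, and the 0.5 variety weight are all assumptions; only the "score for fit and variety" idea and the ~50-100 wpm delay come from the paper.

```python
# Illustrative HUMA-style routing and human-like pacing (assumptions marked).

# Assumed subset of strategy names; the paper's full list of 20 is not shown.
STRATEGIES = ["ask_question", "react_emoji", "reply", "bridge_topic", "stay_silent"]

def fit_score(strategy, history):
    """Toy heuristic for how well a strategy suits the current moment."""
    last = history[-1] if history else ""
    if strategy == "ask_question" and not last.endswith("?"):
        return 1.0  # a question fits after a plain statement
    if strategy == "stay_silent" and len(history) > 10:
        return 0.8  # lurking fits a busy chat
    return 0.2

def route(history, recent_choices):
    """Pick a strategy by combining fit with a variety penalty (weight assumed)."""
    def score(s):
        variety_penalty = recent_choices.count(s)  # discourage repeating moves
        return fit_score(s, history) - 0.5 * variety_penalty
    return max(STRATEGIES, key=score)

def typing_delay(text, wpm=75):
    """Seconds to 'type' a message at a human speed (~50-100 wpm)."""
    words = max(1, len(text.split()))
    return 60.0 * words / wpm
```

For example, after a plain statement the router favors asking a question, but if it has just asked several questions the variety penalty pushes it toward a different move; the delay function then holds the message back roughly as long as a human would need to type it.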
Significant Results
HUMA was tested in 4-person chats about AI art generation (97 participants in total).
The following two criteria were used for evaluation:
Detection (Turing-like): Humans guessed correctly only ~50% of the time, no better than a coin flip. Interestingly, 53% mistook real humans for AI.
Experience: HUMA scored nearly as high as humans (effectiveness 4.14 vs. 4.48 out of 5). Engagement and presence were very similar for humans and the AI. Speed and grammar could not reliably distinguish AI from humans.
Why This Changes Things
This basically shows that AIs can facilitate groups indistinguishably, at least in short sessions. That is huge for online communities, moderation, and support.
Potential: This could lead to more natural social AIs everywhere. Think of how much it could improve support experiences.
Limitations
As with anything in life, there are always some limitations. HUMA currently only supports short chats and a narrow topic (e.g. AI art), and it has no long-term memory.
There are also risks if it is misused for manipulation.
Still, this is a massive step toward humanlike group AI.
Share your thoughts: are group chat AIs going to fool us soon?
References
[1] Paper PDF: https://arxiv.org/pdf/2511.17315