
TL;DR
- A UN-affiliated research institute has created two AI-powered avatars representing individuals affected by the Sudanese conflict.
- The project was led by students at the United Nations University Centre for Policy Research (UNU-CPR).
- The avatars include Amina, a fictional refugee woman in Chad, and Abdalla, a fictional Sudanese paramilitary soldier.
- The effort aimed to experiment with AI for donor engagement and educational tools, but was met with mixed reactions.
- Critics argue the use of AI to simulate refugee voices may overshadow real-world testimony and ethical sensitivities.
UN University Research Sparks AI Ethics Debate
A class at the United Nations University Centre for Policy Research (UNU-CPR) has developed two AI-generated avatars intended to raise awareness about refugee experiences. The experiment, which involved creating interactive digital agents, has triggered a wave of ethical and practical concerns among human rights advocates and the tech community.
As first reported by 404 Media, the student-led project featured Amina, a fictional Sudanese refugee residing in Chad, and Abdalla, a character modeled as a soldier in the Rapid Support Forces (RSF) — a real-world paramilitary group implicated in the Sudanese conflict.
The avatars were intended as proof-of-concept educational tools that users could converse with via a public-facing interface. However, the site experienced technical issues at launch, and not all users were able to access the platform.
The Purpose: Experimental, Not Official Policy
The project was not a formal UN initiative but rather part of a student-led exercise under the guidance of Eduardo Albrecht, a Columbia University professor and senior fellow at UNU-CPR.
“We were just playing around with the concept,” Albrecht told 404 Media. “This isn’t a UN policy recommendation.”
Nonetheless, the project’s underlying research paper outlined potential applications, including:
- Using avatars to pitch refugee stories to donors more rapidly
- Deploying conversational AI for awareness campaigns
- Simulating perspectives in conflict zones to increase empathy in policy dialogues
The paper’s tone emphasized exploration over implementation but suggested that digital avatars could eventually complement or augment advocacy efforts.
AI in Refugee Advocacy
| Metric/Topic | Data Point | Source |
| --- | --- | --- |
| Global forcibly displaced population | 120 million+ | UNHCR, 2024 |
| Number of AI chatbot users globally | 500 million+ | Statista |
| Number of AI humanitarian initiatives | Over 250 projects globally | OCHA Centre for Humanitarian Data |
| Ethical concern raised by participants | Refugees “can speak for themselves in real life” | 404 Media |
Fictional Narratives: Amina and Abdalla
The two AI agents were trained on public domain data, conflict reports, and refugee testimonies. Amina was designed to simulate the lived experience of a woman fleeing Sudan’s civil violence, now navigating the harsh conditions of life in a Chadian refugee camp.
Abdalla, by contrast, was intended to represent a combatant within the RSF — a deeply controversial decision given the RSF’s alleged involvement in war crimes and atrocities. Critics argue that representing such figures via AI without careful framing could risk platforming or normalizing violent narratives.
UNU-CPR noted that the project was entirely fictional and explicitly designed to foster ethical discussions, not offer a real-world solution.
Negative Reception Highlights Ethical Risks
Although the AI avatars were experimental, responses from academics, workshop participants, and human rights advocates were largely skeptical or negative. One commonly cited concern was that such avatars might:
- Undermine authentic refugee testimony
- Oversimplify complex geopolitical narratives
- Create empathy fatigue by gamifying or automating trauma
- Violate dignity and consent standards
“Refugees are very capable of speaking for themselves in real life,” noted one workshop attendee.
Others worried that placing a fictional RSF soldier in direct conversation with a refugee character could blur moral lines and normalize digital storytelling that trivializes violence.
Context: AI’s Expanding Role in Global Governance
The experiment is part of a broader trend in which international organizations explore AI for:
- Data collection and forecasting in crisis regions
- Sentiment analysis of refugee populations
- Simulation-based policymaking
The UNHCR and OCHA have already deployed AI tools to forecast refugee flows and assess needs, while UNESCO has funded AI literacy and ethics programs globally. However, few experiments have tested interactive avatar simulation of refugee identities, making the UNU-CPR project among the first of its kind.
This emerging space has raised flags about AI’s role in representing vulnerable populations — especially without explicit consent.
Funding Use Cases or Ethical Red Flags?
One controversial idea floated in the paper was using avatars to “quickly make a case to donors.” The logic: AI agents could simulate personal testimony on demand, helping NGOs expedite fundraising or policymaker briefings.
But critics argue this risks commodifying refugee experiences and creating a second-tier representation model that excludes actual voices from decision-making rooms.
“There is a risk of turning trauma into transaction,” said AI ethicist Dr. Lila Mensah, who reviewed similar initiatives. “Funders may come to prefer scripted digital avatars over engaging with real refugees and their communities.”
Outlook: Where AI Advocacy Experiments Go From Here
The UNU-CPR team clarified that no further development is currently planned, and that the website may remain offline. However, the experiment opens broader conversations about:
- How to use AI tools for storytelling without erasing lived experience
- When and how avatars can ethically simulate real-world identities
- The role of academic research in pushing ethical boundaries before policy is ready
The future of AI in advocacy, especially in sensitive humanitarian settings, will likely require new ethical guidelines, community consent frameworks, and governance models tailored to these emerging capabilities.