UNSW Fellowship backs GenAISim policy simulator
Dr Yuncheng Hua won a UNSW fellowship to build GenAISim and SOCIA, tools that turn policy questions into AI simulations.

UNSW has given Dr Yuncheng (Devin) Hua a place in its Founders Engineering Spinout Fellowship, a 12-month program aimed at turning research into products people can actually use. The project sits inside the ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) and focuses on a generative AI simulation platform for decision-makers.
The timing matters because the work is already moving beyond theory. Hua’s team has a paper accepted for presentation at the 64th Annual Meeting of the Association for Computational Linguistics in July, and the fellowship gives the project a year of focused commercialisation support.
| Item | Number | Why it matters |
|---|---|---|
| Fellowship length | 12 months | Time to turn research into a usable product |
| Conference | 64th ACL Annual Meeting | Signals peer-reviewed research credibility |
| Publication title | SOCIA-EVO | Shows the simulator work has formal academic output |
| Article date | 4 May 2026 | Places the announcement in current UNSW activity |
What GenAISim is trying to fix
Policy teams keep running into the same problem: they need to understand how an intervention might play out before they commit to it, but most simulation tools are hard to build, hard to explain, and hard to update when new evidence arrives. GenAISim tries to make that process more practical by pairing generative AI with simulation workflows that can handle multi-stakeholder interactions.

At the center of the project is GenAISim: Simulation in the Loop for Multi-Stakeholder Interactions with Generative Agents, which is designed to help decision-makers test “what-if” scenarios. That is a useful idea for urban planning, social policy, and any setting where a spreadsheet cannot capture how people, institutions, and incentives react to each other.
The team is explicit on one point: the platform is meant to support decisions, not replace them. That framing matters because the real value of these tools is usually in narrowing uncertainty, surfacing trade-offs, and showing where a policy might fail before it reaches the public.
- It translates policy requirements into simulation code semi-automatically.
- It keeps a human in the loop for refinement and review.
- It aims for transparency, so users can inspect how a scenario was built.
- It is built around real-world evidence, not just synthetic outputs.
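SOCIA's internals are not published in this article, but the workflow the bullets describe has a recognisable shape: a generative step drafts a simulation spec from a policy question, a human reviews and edits it, and only then does the simulation run. A minimal sketch of that pattern, with a stub standing in for the generative step and every name and parameter below a hypothetical illustration rather than the project's actual API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class SimulationSpec:
    """Structured draft a generator might emit from a policy question."""
    n_agents: int
    adoption_prob: float   # chance an agent adopts the policy each step
    steps: int
    notes: list = field(default_factory=list)

def draft_spec(policy_question: str) -> SimulationSpec:
    # Placeholder for the generative step: in a system like SOCIA an LLM
    # would turn the policy text into simulation structure. Here we just
    # return fixed defaults and record where the draft came from.
    return SimulationSpec(n_agents=100, adoption_prob=0.10, steps=20,
                          notes=[f"drafted from: {policy_question!r}"])

def human_refine(spec: SimulationSpec, **overrides) -> SimulationSpec:
    # The human-in-the-loop step: an analyst inspects the draft and
    # overrides parameters before anything runs. The notes list keeps
    # an audit trail, which is what makes the scenario inspectable.
    for key, value in overrides.items():
        setattr(spec, key, value)
    spec.notes.append(f"refined: {overrides}")
    return spec

def run(spec: SimulationSpec, seed: int = 0) -> int:
    # Toy agent loop: each unadopted agent independently adopts the
    # policy with a fixed per-step probability. Returns final adopters.
    rng = random.Random(seed)
    adopted = [False] * spec.n_agents
    for _ in range(spec.steps):
        for i, done in enumerate(adopted):
            if not done and rng.random() < spec.adoption_prob:
                adopted[i] = True
    return sum(adopted)

spec = human_refine(draft_spec("What if we subsidise rooftop solar?"),
                    adoption_prob=0.05)
print(run(spec))
```

The point of the sketch is the separation of stages: the generated spec is plain data a reviewer can read and edit, and the notes field preserves how the scenario was built, which is the transparency property the bullets emphasise.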
SOCIA is the part that turns questions into code
The project’s technical core is SOCIA, short for Simulation Orchestration for Computational Intelligence with Agents. In plain English, it is a bridge between a policy question and an executable simulation. Instead of asking researchers to hand-build every scenario from scratch, the system helps generate the simulation structure and then lets humans refine it.
That approach is important because simulation work often fails on the last mile. A model can look elegant in a paper and still be useless to a planner if it is opaque, brittle, or impossible to adapt when the policy changes. SOCIA is trying to reduce that gap by making the construction process more automated while preserving oversight.
“It is truly through the collective efforts and collaboration of all the universities and research institutions involved that this project was established, which has given me the opportunity to pursue this line of research and ultimately led to this opportunity,” said Dr Hua.
Flora Salim, a Chief Investigator at ADM+S, also said the fellowship gives the team a path to practical impact. That is the right framing here: the real test is whether a simulator like SOCIA can help public-sector and planning teams ask better questions before they lock in a decision.
Dr Hua’s research profile also explains why this project has traction. His work spans natural language processing, large language models, knowledge graphs, dialogue systems, machine learning, deep learning, reinforcement learning, and causality. That mix is exactly what you would want for a system that has to understand human policy language and turn it into something computational.
The academic and commercial signals line up
Commercialisation fellowships can be vague, but this one has a concrete research trail behind it. The SOCIA-EVO paper, authored by Devin Yuncheng Hua, Sion Weatherhead, Mehdi Jafari, Hao Xue, and Flora D. Salim, has been accepted by ACL, one of the most visible venues in language technology.

That matters because the project sits at the intersection of two worlds that often move at different speeds. Academic research rewards novelty and proof. Government and industry users care about reliability, interpretability, and whether the tool fits messy real-world constraints. A fellowship can help bridge that gap if the team uses the year well.
- UNSW Founders Engineering Spinout Fellowship: 12 months of support for early-career researchers.
- ADM+S: the research home for the GenAISim project.
- ACL 2026: a major venue for the technical paper behind SOCIA-EVO.
- UNSW: the university backing the spinout path.
If the team can keep the system transparent and easier to inspect than a black-box model, SOCIA could become useful for policy labs, city planners, and research groups that need scenario testing without building a custom simulator every time. If not, it risks becoming another clever demo that never leaves the lab.
What to watch next
The next 12 months will show whether GenAISim can move from promising research to something a decision-maker would trust with real planning questions. The practical benchmark is simple: can the team produce simulations that are understandable, editable, and credible enough for non-technical users to act on?
If they can, this fellowship may become a model for how AI research leaves the lab and enters policy work. If they cannot, the project still adds value by showing where automated simulation breaks down. Either way, the important question is no longer whether AI can generate code, but whether it can help people make better calls under uncertainty.
For readers tracking similar research-to-product moves, the more interesting story is how often universities back tools that sit between language models and public decision-making. That is where the next useful AI systems may come from.