OraCore Editors · 7 min read

UNSW Fellowship backs GenAISim policy simulator

Dr Yunchen Hua won a UNSW fellowship to build GenAISim and SOCIA, tools that turn policy questions into AI simulations.

UNSW has given Dr Yunchen (Devin) Hua a place in its Founders Engineering Spinout Fellowship, a 12-month program aimed at turning research into products people can actually use. The project sits inside the ARC Centre of Excellence for Automated Decision-Making and Society and focuses on a generative AI simulation platform for decision-makers.

The timing matters because the work is already moving beyond theory. Hua’s team has a paper accepted for presentation at the 64th Annual Meeting of the Association for Computational Linguistics in July, and the fellowship gives the project a year of focused commercialisation support.

Item              | Detail                  | Why it matters
Fellowship length | 12 months               | Time to turn research into a usable product
Conference        | 64th ACL Annual Meeting | Signals peer-reviewed research credibility
Publication title | SOCIA-EVO               | Shows the simulator work has formal academic output
Article date      | 4 May 2026              | Places the announcement in current UNSW activity

What GenAISim is trying to fix

Policy teams keep running into the same problem: they need to understand how an intervention might play out before they commit to it, but most simulation tools are hard to build, hard to explain, and hard to update when new evidence arrives. GenAISim tries to make that process more practical by pairing generative AI with simulation workflows that can handle multi-stakeholder interactions.

At the centre of the project is GenAISim: Simulation in the Loop for Multi-Stakeholder Interactions with Generative Agents, which is designed to help decision-makers test “what-if” scenarios. That makes it useful for urban planning, social policy, and any setting where a spreadsheet cannot capture how people, institutions, and incentives react to one another.

UNSW’s announcement makes one claim clearly: the platform is meant to support decisions, not replace them. That matters because the real value of these tools usually lies in narrowing uncertainty, surfacing trade-offs, and showing where a policy might fail before it reaches the public.

  • It translates policy requirements into simulation code semi-automatically.
  • It keeps a human in the loop for refinement and review.
  • It aims for transparency, so users can inspect how a scenario was built.
  • It is built around real-world evidence, not just synthetic outputs.

SOCIA is the part that turns questions into code

The project’s technical core is SOCIA, short for Simulation Orchestration for Computational Intelligence with Agents. In plain English, it is a bridge between a policy question and an executable simulation. Instead of asking researchers to hand-build every scenario from scratch, the system helps generate the simulation structure and then lets humans refine it.

That approach is important because simulation work often fails on the last mile. A model can look elegant in a paper and still be useless to a planner if it is opaque, brittle, or impossible to adapt when the policy changes. SOCIA is trying to reduce that gap by making the construction process more automated while preserving oversight.
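The announcement does not publish SOCIA’s internals, so the following is purely an illustrative sketch of the generate-then-refine loop it describes: a generation step produces an editable simulation spec, a human adjusts it, and a simple agent-based run compares a baseline against the intervention. Every name here (build_simulation, run_simulation, adoption_boost) and the adoption dynamics are invented assumptions, not SOCIA’s actual API.

```python
import random

def build_simulation(policy_question: str) -> dict:
    # Hypothetical stand-in for SOCIA's generation step: in the real system,
    # a language model would translate the policy question into a structured
    # simulation spec. Here we just return a fixed, human-editable spec.
    return {
        "question": policy_question,
        "n_agents": 100,
        "adoption_boost": 0.2,  # assumed effect size of the intervention
        "steps": 50,
    }

def run_simulation(spec: dict, seed: int = 0) -> float:
    # Minimal agent-based loop: at each step, agents adopt with a probability
    # driven by current peer adoption plus the policy boost.
    rng = random.Random(seed)
    adopted = [False] * spec["n_agents"]
    for _ in range(spec["steps"]):
        rate = sum(adopted) / len(adopted)
        p = min(1.0, 0.05 + 0.5 * rate + 0.1 * spec["adoption_boost"])
        draws = [rng.random() for _ in adopted]  # paired draws keep runs comparable
        adopted = [a or d < p for a, d in zip(adopted, draws)]
    return sum(adopted) / len(adopted)

spec = build_simulation("What if we subsidise rooftop solar?")
spec["adoption_boost"] = 0.4  # the human-in-the-loop refinement step
baseline = run_simulation({**spec, "adoption_boost": 0.0})
with_policy = run_simulation(spec)
print(f"baseline={baseline:.2f} with_policy={with_policy:.2f}")
```

The design point the sketch tries to capture is the one the article stresses: the generated spec is a plain, inspectable object a non-programmer can read and edit before anything runs, rather than opaque model output.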

“It is truly through the collective efforts and collaboration of all the universities and research institutions involved that this project was established, which has given me the opportunity to pursue this line of research and ultimately led to this opportunity,” said Dr Hua.

Flora Salim, a Chief Investigator at ADM+S, also said the fellowship gives the team a path to practical impact. That is the right framing here: the real test is whether a simulator like SOCIA can help public-sector and planning teams ask better questions before they lock in a decision.

Dr Hua’s research profile also explains why this project has traction. His work spans natural language processing, large language models, knowledge graphs, dialogue systems, machine learning, deep learning, reinforcement learning, and causality. That mix is exactly what you would want for a system that has to understand human policy language and turn it into something computational.

The academic and commercial signals line up

Commercialisation fellowships can be vague, but this one has a concrete research trail behind it. The SOCIA-EVO paper, authored by Devin Yuncheng Hua, Sion Weatherhead, Mehdi Jafari, Hao Xue, and Flora D. Salim, has been accepted by ACL, one of the most visible venues in language technology.

That matters because the project sits at the intersection of two worlds that often move at different speeds. Academic research rewards novelty and proof. Government and industry users care about reliability, interpretability, and whether the tool fits messy real-world constraints. A fellowship can help bridge that gap if the team uses the year well.

If the team can keep the system transparent and easier to inspect than a black-box model, SOCIA could become useful for policy labs, city planners, and research groups that need scenario testing without building a custom simulator every time. If not, it risks becoming another clever demo that never leaves the lab.

What to watch next

The next 12 months will show whether GenAISim can move from promising research to something a decision-maker would trust with real planning questions. The practical benchmark is simple: can the team produce simulations that are understandable, editable, and credible enough for non-technical users to act on?

If they can, this fellowship may become a model for how AI research leaves the lab and enters policy work. If they cannot, the project still adds value by showing where automated simulation breaks down. Either way, the important question is no longer whether AI can generate code, but whether it can help people make better calls under uncertainty.

For readers tracking similar research-to-product moves, the more interesting story is how often universities back tools that sit between language models and public decision-making. That is where the next useful AI systems may come from.