Tag
reasoning
This tag covers how models reason at inference time, from self-re-ranking and shortest-path tasks to recursive reasoning and expert routing in multimodal MoE systems. It matters because small changes in problem length, modality, or routing can expose where reasoning breaks down.
5 articles

AutoTTS lets LLMs discover test-time scaling
AutoTTS turns test-time scaling into an environment search problem, letting LLMs discover cheaper reasoning strategies automatically.

When LLMs Stop Following Procedural Steps
A diagnostic benchmark shows LLMs lose procedural fidelity as step counts grow, even when the arithmetic stays simple.

Select-to-Think: Let SLMs Re-rank Themselves
A new method lets small language models re-rank their own candidates instead of calling an LLM at inference time.

Why LLMs Generalize on Maps but Fail on Scale
A synthetic shortest-path setup shows LLMs transfer across maps but break when problems get longer, because recursive reasoning becomes unstable at scale.

Why multimodal MoE models get distracted
A study of multimodal MoE models finds visual inputs can derail routing to reasoning experts, and a routing-guided fix improves results.