[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-reasoning":3},{"tag":4,"articles":10},{"id":5,"name":6,"slug":6,"article_count":7,"description_zh":8,"description_en":9},"de503411-3a07-4906-a0c9-ff1feba47ae0","reasoning",3,"這個標籤聚焦於模型推理能力的機制與失效模式，像是自我重排序、最短路徑、遞迴推理與多模態路由。它關係到模型在推論階段是否能穩定判斷、選路與延伸到更長更複雜的問題。","This tag covers how models reason at inference time, from self re-ranking and shortest-path tasks to recursive reasoning and expert routing in multimodal MoE systems. It matters because small changes in problem length, modality, or routing can expose where reasoning breaks down.",[11,20,27,34,41],{"id":12,"slug":13,"title":14,"summary":15,"category":16,"image_url":17,"cover_image":17,"language":18,"created_at":19},"afddc8c2-ae3d-416b-bacd-63d8d4e4899b","autotts-llms-discover-test-time-scaling-en","AutoTTS lets LLMs discover test-time scaling","AutoTTS turns test-time scaling into an environment search problem, letting LLMs discover cheaper reasoning strategies automatically.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778479852627-3ju7.png","en","2026-05-11T06:10:31.579371+00:00",{"id":21,"slug":22,"title":23,"summary":24,"category":16,"image_url":25,"cover_image":25,"language":18,"created_at":26},"f414aa1a-27e8-45d9-b407-d542121915d2","llms-procedural-execution-diagnostic-study-en","When LLMs Stop Following Procedural Steps","A diagnostic benchmark shows LLMs lose procedural fidelity as step counts grow, even when the arithmetic stays simple.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777875670060-pmbt.png","2026-05-04T06:20:27.84519+00:00",{"id":28,"slug":29,"title":30,"summary":31,"category":16,"image_url":32,"cover_image":32,"language":18,"created_at":33},"5abc17e1-200d-4005-90a2-ba5abc1187bb","select-to-think-slms-local-sufficiency-en","Select-to-Think: Let SLMs Re-rank Themselves","A new method lets small language models re-rank their own candidates instead of calling an LLM at inference time.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777530657379-kuvy.png","2026-04-30T06:30:36.54762+00:00",{"id":35,"slug":36,"title":37,"summary":38,"category":16,"image_url":39,"cover_image":39,"language":18,"created_at":40},"443c85ce-62b3-4336-ad93-7a8a1538d271","llm-generalization-shortest-path-scale-en","Why LLMs Generalize on Maps but Fail on Scale","A synthetic shortest-path setup shows LLMs transfer across maps, but break when problems get longer because recursive reasoning gets unstable.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776406022431-jsmd.png","2026-04-17T06:06:34.142981+00:00",{"id":42,"slug":43,"title":44,"summary":45,"category":16,"image_url":46,"cover_image":46,"language":18,"created_at":47},"10a60b90-b59c-47e7-a6e5-a7fba43c353a","multimodal-moe-routing-distraction-en","Why multimodal MoE models get distracted","A study of multimodal MoE models finds visual inputs can derail routing to reasoning experts, and a routing-guided fix improves results.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775801394754-ctzn.png","2026-04-10T06:09:35.090825+00:00"]