[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-chain-of-thought":3},{"tag":4,"articles":10},{"id":5,"name":6,"slug":6,"article_count":7,"description_zh":8,"description_en":9},"e61cb9bd-6313-4d74-81a1-4614874757e9","chain-of-thought",4,"Chain-of-thought 著重模型如何把多步推理串起來，而不只是給出最後答案。這個主題涵蓋長鏈推理、agent 迴圈、結構化輸出與長上下文下的穩定性，對評估與部署 LLM 很重要。","Chain-of-thought focuses on how models connect intermediate reasoning steps, not just final answers. It includes long-horizon benchmarks, agent loops, structured outputs, and stability under long context, all of which matter when evaluating and deploying LLMs.",[11,20],{"id":12,"slug":13,"title":14,"summary":15,"category":16,"image_url":17,"cover_image":17,"language":18,"created_at":19},"9f62add5-cae5-47eb-abd5-2e56d0d5698c","longcot-long-horizon-chain-of-thought-benchmark-en","LongCoT Benchmark: 2,500-Probl. Long-Horizon Reasoning","LongCoT is a 2,500-problem benchmark for measuring whether frontier models can sustain long, interdependent reasoning chains.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776319782523-s0wz.png","en","2026-04-16T06:09:23.265233+00:00",{"id":21,"slug":22,"title":23,"summary":24,"category":25,"image_url":26,"cover_image":26,"language":18,"created_at":27},"28a1b97c-06c1-4112-8fb5-a9ff8e58fcd9","prompt-engineering-agents-structured-outputs-en","Prompt Engineering for Agents and Structured Outputs","Prompt engineering gets harder in production: reasoning, long contexts, JSON contracts, and agent loops all need different prompt tactics.","ai-agent","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775164941484-fp41.png","2026-04-02T21:21:45.840568+00:00"]