{
  "data": {
    "tag-rlvr": {
      "tag": {
        "id": "026e6191-9e7a-412e-a53b-885160b85bc2",
        "name": "RLVR",
        "slug": "rlvr",
        "article_count": 3,
        "description_zh": "RLVR（reinforcement learning with verifiable rewards）指的是以可驗證回饋訓練模型，常見於數學、程式與推理任務。重點不在主觀偏好，而是用正確答案、單元測試或規則檢查來驅動學習，也因此牽動冷啟動、探索與穩定性等問題。",
        "description_en": "RLVR, or reinforcement learning with verifiable rewards, trains models on tasks where success can be checked objectively: math proofs, coding problems, unit tests, or rule-based outputs. It matters because reward design here shapes cold-start behavior, exploration, and training stability."
      },
      "articles": [
        {
          "id": "8e6e5e5b-c51f-495e-a596-203fb64c71eb",
          "slug": "tsallis-loss-reasoning-model-training-zh",
          "title": "Tsallis loss 讓推理模型更快脫困",
          "summary": "這篇論文用 Tsallis q-logarithm 搭出一條損失函數光譜，想解決推理模型在冷啟動時卡住的問題。它把 RLVR 和 latent trajectory 的 log-marginal-likelihood 串成可調參的連續體。",
          "category": "research",
          "image_url": "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1777443006073-083j.png",
          "cover_image": "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1777443006073-083j.png",
          "language": "zh",
          "created_at": "2026-04-29T06:09:37.277494+00:00"
        },
        {
          "id": "2428c4f3-8cbf-43dc-afe8-dad89550740f",
          "slug": "prerl-training-llms-in-pre-train-space-zh",
          "title": "PreRL：把強化學習搬進預訓練空間",
          "summary": "PreRL 把 RL 從 P(y|x) 轉向 P(y)，直接在預訓練空間做獎勵更新，主打增強推理與探索。摘要也提到 NSR 與 DSRL 兩種設計。",
          "category": "research",
          "image_url": "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776319619099-op5n.png",
          "cover_image": "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776319619099-op5n.png",
          "language": "zh",
          "created_at": "2026-04-16T06:06:37.875971+00:00"
        }
      ]
    }
  },
  "prerenderedAt": -1
}