[
  {"data":1,"prerenderedAt":-1},
  ["ShallowReactive",2],
  {"tag-rlvr":3},
  {"tag":4,"articles":11},
  {"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},
  "026e6191-9e7a-412e-a53b-885160b85bc2",
  "RLVR",
  "rlvr",
  3,
  "RLVR（reinforcement learning with verifiable rewards）指的是以可驗證回饋訓練模型，常見於數學、程式與推理任務。重點不在主觀偏好，而是用正確答案、單元測試或規則檢查來驅動學習，也因此牽動冷啟動、探索與穩定性等問題。",
  "RLVR, or reinforcement learning with verifiable rewards, trains models on tasks where success can be checked objectively: math proofs, coding problems, unit tests, or rule-based outputs. It matters because reward design here shapes cold-start behavior, exploration, and training stability.",
  [12],
  {"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},
  "dbcae3bd-5f14-4baf-9604-0011f7382732",
  "tsallis-loss-reasoning-model-training-en",
  "Tsallis loss for faster reasoning-model training",
  "A Tsallis-loss continuum may help reasoning models escape cold-start stalls faster than RLVR, with tradeoffs between speed, noise, and stability.",
  "research",
  "https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777443011556-1zy3.png",
  "en",
  "2026-04-29T06:09:38.777932+00:00"
]