[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-why-coding-benchmarks-are-finally-telling-the-truth-zh":3,"tags-why-coding-benchmarks-are-finally-telling-the-truth-zh":36,"related-lang-why-coding-benchmarks-are-finally-telling-the-truth-zh":46,"related-posts-why-coding-benchmarks-are-finally-telling-the-truth-zh":50,"series-research-5b168b94-465a-4d72-bbb1-e6577625cb1a":87},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":30,"topic_cluster_id":34,"embedding":35,"is_canonical_seed":20},"5b168b94-465a-4d72-bbb1-e6577625cb1a","為什麼程式碼基準測試終於開始說實話","\u003Cp data-speakable=\"summary\">LiveCodeBench 和 \u003Ca href=\"\u002Ftag\u002Fswe-bench\">SWE-bench\u003C\u002Fa> Pro 已經能更準確分出真正能寫程式的模型與只會刷榜的模型。\u003C\u002Fp>\u003Cp>我認為，程式碼模型的選型標準已經變了，現在再拿 HumanEval 當主要依據，是在做錯產品決策。BenchLM 2026 年 3 月的排行榜把這件事講得很直接：\u003Ca href=\"\u002Ftag\u002Fclaude-mythos\">Claude Mythos\u003C\u002Fa> Preview 以 100.0 的加權分數居首，\u003Ca href=\"\u002Fnews\u002Fhow-to-add-temporal-rag-in-production-zh\">Gemini\u003C\u002Fa> 3.1 Pro 以 93.9 緊追，GPT-5.3 Codex 在 SWE-bench Pro 上衝到 77.3，成為頁面上最高的開源權重相關結果。這些差距不是裝飾性的數字，而是能不能在真實倉庫裡修 bug、接 test、過 CI 的差別。\u003C\u002Fp>\u003Ch2>第一個論點：真實程式工作不是玩具題\u003C\u002Fh2>\u003Cp>BenchLM 把 SWE-bench Pro 和 LiveCodeBench 等權重看待，這個設計是對的。SWE-bench Pro 來自真實 GitHub issue，測的是模型能不能在混亂的 repository 裡把問題修掉；LiveCodeBench 則持續出新題，降低資料污染的風險。這兩者合在一起，才接近工程團隊真正需要的能力：能不能處理多檔案、能不能理解上下文、能不能在沒看過的題型上維持推理品質。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg 
src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778670697069-56o7.png\" alt=\"為什麼程式碼基準測試終於開始說實話\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>HumanEval 已經明顯失去區分力。BenchLM 指出，前沿模型在這個基準上幾乎都超過 95%，也就是說它早就無法幫你分辨「夠用」和「真的能上線」。當一個測試大家都能過，它就不再是選型工具，只剩熟悉舊題庫的獎勵機制。若你的評估流程還把 HumanEval 放在核心位置，你其實是在優化過去的模型，而不是現在的產品。\u003C\u002Fp>\u003Ch2>第二個論點：排行榜開始反映品質、成本與部署的真實取捨\u003C\u002Fh2>\u003Cp>這份排行榜有價值的地方，在於它沒有假裝準確率是唯一指標。Claude Mythos Preview 雖然名列第一，但頁面也把更務實的選項攤開來：重視自架的團隊可以看 GPT-5.3 Codex，追求平衡的可以看 GPT-5.4，預算優先的則有像 Qwen3.6-27B 這類較便宜的開源模型。這才是正確的選型方式，因為團隊買的不是分數本身，而是能否在延遲、成本與可靠性之間守住門檻。\u003C\u002Fp>\u003Cp>數據也把這個取捨具體化了。\u003Ca href=\"\u002Fnews\u002Fgoogle-gemini-android-center-before-wwdc-zh\">Gemini\u003C\u002Fa> 3.1 Pro 標示的價格是每百萬 input token 2 美元、output token 12 美元，吞吐量 109 tokens\u002Fs，TTFT 為 29.71 秒；GPT-5.3 Codex 雖然在某些成本維度上不一定最便宜，但 88.7 的加權分數與 \u003Ca href=\"\u002Ftag\u002Fswe-bench-verified\">SWE-bench Verified\u003C\u002Fa> 的 85 分，已經把它和入門級模型拉開層級差距。BenchLM 也明講，5 分差距通常就足以區分一個能修複雜多檔案 bug 的模型，和一個會卡住的模型。在程式碼場景裡，這種差距不是四捨五入的誤差，而是一次失敗的 patch。\u003C\u002Fp>\u003Ch2>反方可能怎麼說\u003C\u002Fh2>\u003Cp>最強的反對意見其實很合理：別太相信任何排行榜。基準測試天生不完整，而程式碼尤其難測。模型可以在公開題庫上拿高分，卻在你的私有 monorepo 裡翻車，原因可能只是 build tool 很怪、測試很脆、或團隊慣例太特殊。批評者會說，leaderboard 很容易變成 \u003Ca href=\"\u002Ftag\u002Fbenchmark\">benchmark\u003C\u002Fa> tuning 的戰場，而不是產品價值的證明。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778670666585-ecc3.png\" alt=\"為什麼程式碼基準測試終於開始說實話\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這個批評成立，但它否定的不是 BenchLM，而是「只看單一分數」的做法。BenchLM 自己其實已經承認限制：HumanEval 已經飽和，SWE-bench Verified 只是參考點，LiveCodeBench 才是更能抵抗污染的訊號。這\u003Ca href=\"\u002Fnews\u002Fwhy-ibm-bob-right-kind-ai-coding-assistant-zh\">才是對\u003C\u002Fa> benchmark 懷疑論最好的回應，不是崇拜排行榜，而是把它當篩選器，再回到自己的 
repo 做驗證。你該拒絕的不是所有程式碼基準，而是把過時基準當成決策核心的習慣。\u003C\u002Fp>\u003Cp>所以我的結論很明確：不是基準測試沒用，而是只有少數基準還有用。LiveCodeBench 與 SWE-bench Pro 仍然能告訴你很多事，尤其是模型是否真的能處理真實工程工作；HumanEval 則已經太容易被刷高，不適合再主導選型。\u003C\u002Fp>\u003Ch2>你能做什麼\u003C\u002Fh2>\u003Cp>如果你是工程師，先用 SWE-bench Pro 和 LiveCodeBench 把候選模型縮到少數幾個，再拿你自己的 bug-fix、\u003Ca href=\"\u002Ftag\u002Fcode-review\">code review\u003C\u002Fa>、測試修補流程去跑；如果你是 PM，不要再問「哪個 coding model 最強」，而要問哪個模型在你的延遲、成本、部署條件下還能守住可靠性門檻；如果你是創辦人，把產品評估建立在真實 repo 工作上，而不是能討好簡報的舊題庫。最後你真正該追的，不是最好看的分數，而是最能在真實程式碼裡活下來的模型。\u003C\u002Fp>","BenchLM 的程式碼排行榜顯示，真正有用的訊號只剩 LiveCodeBench 與 SWE-bench Pro；HumanEval 已經不適合拿來選模型。","benchlm.ai","https:\u002F\u002Fbenchlm.ai\u002Fcoding",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778670697069-56o7.png",[13,14,15,16,17],"BenchLM","LiveCodeBench","SWE-bench Pro","HumanEval","程式碼模型評估","zh",0,false,"2026-05-13T11:10:25.586869+00:00","2026-05-13T11:10:25.546+00:00","done","8bddc0a1-1f18-4e7d-9f52-e6932a6764db","why-coding-benchmarks-are-finally-telling-the-truth-zh","research","a5281bf5-661d-4288-b00e-0aa245e1fb03","published","2026-05-14T09:00:18.022+00:00",[31,32,33],"HumanEval 已經飽和，不適合再當程式碼模型的主要選型依據。","LiveCodeBench 與 SWE-bench Pro 更接近真實工程工作，能分辨模型是否真的能修 repo 
裡的問題。","選模型不能只看分數，還要一起看成本、延遲與部署限制。","0c35a120-52fc-41fc-afa3-d404eb934158","[-0.031042362,0.02104377,0.013267175,-0.08454874,-0.0105339065,-0.017937573,-0.007892096,0.010248443,0.0022256617,0.0023792635,0.021360347,-0.021880766,0.02568012,-0.011496461,0.11715251,0.041016947,0.0044862833,0.023216885,0.0074334173,-0.022251254,-0.004815297,0.015275952,0.008748781,-0.0007394126,-0.010636257,-0.0043704803,-0.00931434,0.020914547,0.05347477,0.0132376775,-0.011670483,0.017267015,-0.0075181914,0.022541102,0.007171104,0.014721585,0.041226096,0.00043348942,0.009874156,0.0062056878,-0.0142984865,-0.016064499,0.02639965,-0.036308844,-0.014924094,0.032913193,0.0026260987,-0.024990713,-0.033198897,-0.010472264,-0.0056416085,0.031672314,-0.0024434815,-0.1510959,-0.009147489,0.010929474,0.008131503,0.011364569,-0.019197034,-0.0050534653,-0.02099045,-0.004702085,-0.03876386,-0.024994304,0.0102285985,-0.029429322,0.0032429544,-0.00015516672,0.000114931776,-0.0031763539,-0.025889812,0.0060630236,0.0064649973,-0.025579717,0.009194857,-0.0159997,0.010029703,0.00541181,-0.002831336,0.0095870495,-0.00032578903,0.002329086,0.0049683787,-0.0042740284,-0.0044178003,0.009667389,0.008115357,-0.0032759358,0.014009039,0.0023396167,-0.013809351,0.002453971,-0.005712424,-0.013392927,-0.008858969,-0.004125192,-0.042759903,-0.0007842422,0.006593126,-0.017128574,0.017455777,-0.005628131,0.007499524,0.0047496194,-0.0023197548,0.004025517,0.00260275,0.015634049,-0.0024529446,-0.0053068195,-0.01762735,-0.02026312,-0.04577675,0.0007133497,0.013244823,-0.13069203,-0.016546672,0.015408345,-0.0041743238,-0.0029036393,-0.018069176,0.012364397,-0.0048019146,0.045958705,-0.028553953,-0.01258707,0.0045272294,-0.010480017,-0.013786878,0.016851835,-0.046499185,0.019907542,0.00027525562,-0.028969599,-0.017425124,0.0020270874,0.022773618,-0.029679667,-0.015355563,-0.023364548,0.0009440052,0.014903411,-0.01293824,-0.010920158,-0.010694409,-0.019165486,-0.026969666,0.0074517746,-0.02145926,-0.009348084,0.019001685,-0
.009751987,-0.014245992,-0.020391714,0.01907285,-0.025821643,0.028220177,0.03818485,0.026346916,0.015301341,-0.0050898707,-0.015233558,-0.009870974,-0.03292827,0.015375462,0.017848594,-0.0037109528,0.017896997,-0.004687405,0.032935817,0.006629901,0.009987369,-0.009055542,0.009473377,0.013846466,-0.028426722,-4.2643143e-05,-0.0051447838,0.004776201,0.027770303,0.02744902,-0.017833637,-0.0053586676,0.02185799,-0.027312914,-0.00095733715,0.019413395,0.0041902247,0.008345906,0.014911275,-0.025924623,0.00038086646,0.045071058,-0.033519477,0.012683191,-0.010580641,-0.002048662,3.1065287e-05,0.017644158,0.02790851,0.041273028,-0.0065855426,0.023444472,-0.0062858462,-0.0023723114,-0.006004615,0.00653884,-0.020052489,-0.021347642,0.010906442,-0.025101343,0.026045034,-0.012624899,-0.00059140136,-0.013732237,-0.020470852,-0.0072149,-0.017831136,0.013521722,-0.012143435,-0.007145826,-0.028638091,0.018990919,0.029603684,-0.038627896,-0.03144744,-0.0051404717,-0.031924173,-0.024723189,0.034985293,0.027441323,0.010205711,0.0056651556,-0.0071011386,0.013158826,-0.006230212,0.016997973,0.027497828,0.01536707,0.025101716,-0.012662951,0.014451501,-0.0027675165,0.014806756,0.012185972,0.004817552,0.0042302897,-0.0045549693,-0.015373748,0.01131736,0.005568774,0.014776593,0.025538264,-0.032556463,0.028279925,-0.006082261,-0.03053118,0.026394395,-0.0009592106,0.011850438,0.015802471,0.025247905,-0.020230336,-0.006806752,0.0121848835,0.009125509,0.022988895,-0.018608086,-0.030084182,0.018569065,-0.018644275,0.0023364597,0.004432123,0.004972008,0.008356719,0.008422087,-0.055168923,0.026428929,0.011915669,-0.0056766733,0.03894503,-0.007748835,-0.026284663,-0.0013238632,-0.022428073,-0.0038006485,-0.027238166,-0.018402362,0.009966948,-0.02074086,-0.002603667,0.010881832,-0.025096009,0.0119835185,-0.0351399,0.0012379495,0.016838498,-0.009069438,0.0029368273,-0.027368983,-0.0029070447,-0.00720626,0.02072576,0.044365045,-0.018418897,0.0039372114,0.002894568,0.01276839,-0.005335181,-0.015428409,-
0.013301526,-0.008992978,0.006519743,-0.022994723,-0.015920026,0.016237438,0.008562082,-0.009273493,-0.016363751,-0.008415246,0.014382997,-0.021690108,-0.012242881,-0.0015186328,-0.008222027,-4.1856185e-07,0.023833383,-0.010840813,0.015479888,0.006745792,0.01862937,0.03186869,0.015196745,-0.014946511,0.01078158,-0.005212159,-0.01689464,-0.025523596,-0.021328831,-0.0153368125,-0.024067486,-0.0039475667,-0.016390886,-0.010734526,-0.0049201455,0.019606864,0.0034382746,0.00572361,-0.019162951,-0.040730245,0.025264306,-0.015241413,0.011509067,-0.006646436,-0.01334055,0.011000569,-0.01610454,-0.0019155272,0.01905288,0.011160449,0.014806165,0.013657967,-0.008570627,0.020251988,0.020687696,-0.029935108,0.013586315,-0.011121114,-0.017753463,-0.015117773,-0.0020752097,0.037651926,-0.0073484196,-0.008351745,-0.00643127,0.0027775269,-0.012088062,0.018185684,0.017805975,0.008127954,0.017540649,0.023876812,-0.018480426,-0.0018526017,-0.017573256,-0.00891116,0.024951875,0.020924255,-0.017445283,0.02037713,0.035173267,-0.007730421,-0.0036050063,0.009669387,0.0029659611,0.024753349,-0.01989772,0.012379853,-0.0018507236,-0.0011604666,-0.009570408,-0.014978456,0.022304598,0.01558567,-0.017351924,0.026268186,-0.018462945,0.026238587,0.01662105,0.01961988,0.008018862,0.025515668,0.013734566,0.020414317,0.010220167,0.01142859,0.011202315,-0.0003496343,0.019682497,0.0007522315,-0.01855469,0.048707686,0.017086517,0.0012729955,-0.012906577,-0.0047077686,-0.023228701,-0.011241341,0.0097656455,-0.023729164,0.011625419,-0.0057718763,0.010844007,-0.01523865,-0.042912517,-0.0147999,-0.028427858,0.018533988,-0.04286956,-0.030229196,0.007902928,0.004631681,-0.0023734856,-0.0116925,0.011487221,-0.014175147,0.0029178665,-0.028024908,-0.039126847,-0.0051498925,0.034899928,0.022366876,0.04641271,0.0035507611,0.011767307,-0.008157234,0.0039990065,0.0010648244,-0.023901176,-0.022696,0.006742125,-0.003642045,0.005759779,0.04441689,-0.011082724,-0.0038970057,-0.011241613,-0.0073371707,-0.036867425,0.00304
2415,-0.016672816,-0.0026793084,-0.0015132973,0.03402346,-0.027634196,0.034666415,0.0152739575,0.0039403094,0.01604234,-0.0027620986,0.017284269,-0.013362279,0.028271979,-0.034308467,0.0074109584,0.015049625,-0.003988287,0.0058414023,-0.015617658,0.0046421145,0.057417244,0.02705171,0.008865874,-0.002124343,-0.016630238,-0.010801141,-0.028847191,-0.028946199,0.015184979,-0.013050945,0.008912295,-0.0014494224,0.0076591973,0.001857995,-0.014486767,0.0059927106,-0.0099054715,0.0033845657,-0.014401696,-0.004067382,-0.007887865,0.006576812,0.03529663,-0.0020351477,-0.01899408,-0.002429288,-0.0010675136,0.0021044924,-0.015817244,-0.021312049,-0.01713579,-0.01059683,-0.020957509,0.0064832075,-0.0067441375,-0.01025203,0.030785223,0.011453597,0.031769708,0.03851703,-0.004205136,0.0200601,0.0033574142,0.012038972,0.009096414,-0.0027418707,0.015872398,0.015738033,-0.02075834,0.007768636,-0.029314807,-0.0082064355,-0.01748964,-0.010967738,0.030140541,-0.0887017,-0.009049214,0.0029347474,-0.017399624,-0.014614985,-0.005380872,0.013139077,-0.031279676,-0.004423345,0.014739276,0.010728541,-0.010808295,-0.0036948076,0.019670732,-0.0037062753,-0.011598442,-0.01858133,0.0002229278,0.0147408,-0.014852092,0.005004666,0.011636451,0.020493582,0.002365744,-0.00371242,0.0053614103,0.018594984,-0.00091214385,0.01038229,-0.01567109,-0.024748292,0.013519694,0.016256267,0.033483777,0.013444705,-0.00490242,0.009019282,-0.00593003,0.0023184363,-0.00034292816,0.0050870473,0.005209262,0.0002192878,-0.027169246,-0.011022753,-0.002882119,0.003153045,0.007041925,0.028249439,-0.0020879507,-0.032785118,-0.025504975,0.01151957,-0.029494448,-0.001503209,-0.024145441,-0.01979815,-0.02218851,-0.014337197,-0.0070956675,-0.018470965,-0.005896248,0.003930647,0.027720533,-0.02393492,0.009806487,0.0022181715,0.031924333,0.0015130235,0.01601754,0.0043802927,-0.034665354,-0.0059119593,0.021060284,0.0012596891,0.003614995,0.0067153373,0.01004693,-0.017534032,-0.00471902,-0.027517648,-0.016686106,-0.082022466,0.0115
39186,-0.025025053,-0.0030647381,-0.0046421,-0.023651721,0.028710527,-0.033215903,-0.0233516,0.0012937615,-0.010231686,0.012383352,0.011852709,-0.011618157,0.008102621,-0.003277181,0.01840255,0.001401238,0.014606678,-0.02074679,0.0030329798,0.010894761,0.0049868664,-0.026897265,-0.024547206,0.031390347,-0.007626125,-0.004892472,-0.02188095,-0.0057008923,-0.0015281966,-0.13050091,-0.03316527,-0.0078034583,-0.015108618,0.005210735,0.023337243,0.021345004,0.0016982473,0.0031792587,0.017693862,-0.017126784,-0.009361119,-0.017923506,-0.012546117,-0.004023574,0.1142867,-0.0122689195,0.0032761297,-0.049601573,-0.017040636,0.0017784731,-0.03454981,-0.021377096,0.011023772,-0.00035457016,-0.0053187027,0.04391993,-0.010248187,-0.01803224,0.012092056,0.030338483,-0.003099797,-0.010471644,-0.02946585,0.006259156,-0.0069338977,-0.01703436,-0.029168501,0.0053449473,0.0048435666,0.013547657,-0.0030847134,0.023310525,0.0024849623,-0.028910926,0.006368998,-0.0057689426,-0.025831845,-0.007902329,0.013582854,-0.025876628,-0.06616178,-0.0038532177,-0.014860747,-0.012900692,0.011324469,-0.010704585,-0.0027125478,0.006028294,0.012475812,0.01855449,-0.010490875,0.013155085,0.027388372,-0.03765343,0.0038443604,0.043994542,0.024616007,-0.0042853793,0.0004410842,-0.003054732,0.007912091,-0.015495637,-0.0062949737,0.0020913833,-0.017012153,-0.0030875457,0.034067128,0.015654646,-0.002898418,0.0041605453,0.016809816,0.007888825,-0.02568017,0.0040411786,-0.0015728961,0.012786287,-0.0006858591,-0.0021255063,-0.03363621,0.009099662,0.015786402,-0.0063819443,0.0003652328,0.0034116693,0.02995451,0.0061123297,0.011586,0.007173324,-0.0010384027,-0.002732646,-0.00295051,0.033854038,0.003317229,0.029220881,0.020507967,0.00486234,0.008082723,0.01858146,0.02233075]",[37,38,40,42,44],{"name":17,"slug":17},{"name":15,"slug":39},"swe-bench-pro",{"name":13,"slug":41},"benchlm",{"name":14,"slug":43},"livecodebench",{"name":16,"slug":45},"humaneval",{"id":27,"slug":47,"title":48,"language":49},"why-coding-bench
marks-are-finally-telling-the-truth-en","Why coding benchmarks are finally telling the truth","en",[51,57,63,69,75,81],{"id":52,"slug":53,"title":54,"cover_image":55,"image_url":55,"created_at":56,"category":26},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":26},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":26},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":26},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":26},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 
基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":82,"slug":83,"title":84,"cover_image":85,"image_url":85,"created_at":86,"category":26},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[88,93,98,103,108,113,118,123,128,133],{"id":89,"slug":90,"title":91,"created_at":92},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 
強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":134,"slug":135,"title":136,"created_at":137},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]