[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-taming-black-box-llm-inference-scheduling-zh":3,"tags-taming-black-box-llm-inference-scheduling-zh":36,"related-lang-taming-black-box-llm-inference-scheduling-zh":46,"related-posts-taming-black-box-llm-inference-scheduling-zh":50,"series-research-941f698a-1dcf-4807-bd56-5295c07d2dee":87},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":30,"topic_cluster_id":34,"embedding":35,"is_canonical_seed":20},"941f698a-1dcf-4807-bd56-5295c07d2dee","黑箱 LLM 排程更聰明了","\u003Cp data-speakable=\"summary\">這篇在講怎麼用預測輸出長度，改善黑箱 \u003Ca href=\"\u002Ftag\u002Fllm\">LLM\u003C\u002Fa> 推論排程。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.06970\">Scheduling the Unschedulable: Taming Black-Box LLM Inference at Scale\u003C\u002Fa> 盯上的，是一個很實際的伺服器痛點：LLM 不是固定長度回應，送進來之後，系統常常不知道它到底會生成多久。對排程器來說，這就像要在半盲狀態下分配資源。當請求量一大，吞吐、延遲、批次化策略都會被這種不確定性拖住。\u003C\u002Fp>\u003Cp>這篇論文的切入點不是改模型本身，而是改推論服務層。作者想處理的是黑箱 LLM inference，也就是營運方看不到模型內部細節、也不一定能拿到完整 runtime 訊號的情境。這種情況下，傳統「邊跑邊看」的排程方式會很被動，因為真正的生成長度，要等 decode 進行後才知道。\u003C\u002Fp>\u003Ch2>這篇論文要解什麼痛點\u003C\u002Fh2>\u003Cp>LLM 推論跟一般 \u003Ca href=\"\u002Ftag\u002Fapi\">API\u003C\u002Fa> 很不一樣。一般固定回應長度的服務，系統比較容易估算成本。但 LLM 每個 request 的輸出長度差異很大，有的很快結束，有的會一路生成很久。只要排程器看不準，短請求就可能被長請求卡在後面，形成 head-of-line blocking，使用者感受到的就是「明明有資源，卻還是慢」。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778740253221-wgy6.png\" alt=\"黑箱 LLM 排程更聰明了\" 
class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這個問題在黑箱場景更明顯。因為你不一定知道模型內部怎麼運作，也不一定能直接拿到足夠細的執行資訊。結果就是，系統要在資訊不足的前提下做決策。論文把這個狀況描述成接近「無法排程」的問題，重點不是完全不能做，而是很難用傳統方式做得好。\u003C\u002Fp>\u003Cp>對開發者來說，這不是抽象的研究議題。只要你在做共享推論基礎設施、多租戶 API、或是要讓不同長度的請求共存，這個問題就會出現。排程器猜錯一次，後面就會連鎖影響整體體感。\u003C\u002Fp>\u003Ch2>方法到底怎麼運作\u003C\u002Fh2>\u003Cp>論文的核心假設很直接：在 request 送進來的當下，就能預測它大概會輸出多少 \u003Ca href=\"\u002Ftag\u002Ftoken\">token\u003C\u002Fa>。只要有這個估計值，排程器就不必把每個請求都當成同樣模糊的黑盒子，而是可以先知道哪些 request 可能吃掉比較多計算資源。\u003C\u002Fp>\u003Cp>有了這個訊號，排程器就能在真正開始執行前，先做比較好的決策。它可以調整 queue 順序，也可以影響資源分配與批次處理方式。重點不是等模型跑到一半才發現「這個 request 很長」，而是把這件事提前到提交時就納入考量。論文想做的，是把這種預先知道一點點的資訊，變成比較可控的 inference pipeline。\u003C\u002Fp>\u003Cp>這裡要注意，方法並不是改變模型結構，也不是讓黑箱變成白箱。它是在 serving layer 上動手，讓排程器更聰明。對很多實務團隊來說，這反而是更可行的方向，因為他們能改的是服務層，而不是模型本體。\u003C\u002Fp>\u003Cp>換句話說，這篇不是在追求完美預測，而是在利用「足夠早的粗略預測」來減少排隊摩擦。它處理的是可操作性，不是幻想把不確定性完全消掉。\u003C\u002Fp>\u003Ch2>論文實際證明了什麼\u003C\u002Fh2>\u003Cp>就這份 raw 資料來看，摘要沒有公開完整 \u003Ca href=\"\u002Fnews\u002Faisafetybenchexplorer-ai-safety-benchmarks-zh\">benchmark\u003C\u002Fa> 細節，也沒有提供數字型結果。也就是說，這裡看不到 \u003Ca href=\"\u002Fnews\u002Fanthropic-cat-wu-proactive-ai-assistants-zh\">latency\u003C\u002Fa>、throughput、成本或其他量化指標，沒辦法直接用數據比較它到底贏了多少。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778740256836-fy7v.png\" alt=\"黑箱 LLM 排程更聰明了\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>不過，從摘要能確定的是，作者主張「在提交時預測輸出長度」本身，就足以改善黑箱 LLM inference 的排程決策。這代表論文的貢獻比較偏系統設計與可行性論證，而不是提出一個新的模型架構或訓練方法。\u003C\u002Fp>\u003Cp>也因為摘要資訊有限，目前還看不出幾個實作上很關鍵的細節：預測輸出長度的方法是什麼、準確度如何、不同工作負載下是否穩定、以及它對排程改善的幅度有多大。這些都會直接影響實際部署價值，但 raw 
資料沒有展開。\u003C\u002Fp>\u003Cp>所以，若只根據目前可見內容，這篇論文最重要的訊息不是「我已經證明大幅加速」，而是「黑箱推論也能透過提前預測，變得比較能排」。這種論點對系統研究很常見，但真正落地時，還是得看預測品質與排程策略能不能配合。\u003C\u002Fp>\u003Ch2>對開發者有什麼影響\u003C\u002Fh2>\u003Cp>如果你在做 LLM 服務，這篇的方向很值得注意。因為 inference scheduling 本來就是最容易被忽略、但又最容易影響體感的地方。只要能減少長請求把短請求壓住的情況，使用者就會覺得系統更快、更穩。\u003C\u002Fp>\u003Cp>這也反映出一個更大的趨勢：黑箱 LLM 的使用越來越多，服務端常常只能依賴有限觀測來做優化。既然看不到模型內部，那就只能想辦法從可見訊號下手。預測輸出長度，就是一種很務實的訊號利用方式。\u003C\u002Fp>\u003Cp>對實作來說，這種方法可能特別適合以下情境：\u003C\u002Fp>\u003Cul>\u003Cli>request 長度差異很大，排隊行為明顯受長輸出影響。\u003C\u002Fli>\u003Cli>多租戶共享推論資源，需要控制 head-of-line blocking。\u003C\u002Fli>\u003Cli>模型是黑箱，服務層拿不到足夠細的內部狀態。\u003C\u002Fli>\u003Cli>系統願意接受「粗估」來換取更好的排程決策。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>但它也不是萬靈丹。只要輸出長度預測不準，排程器就還是會做出偏差決策。這篇論文目前能支持的結論，是「有這個訊號會比完全沒訊號好」，不是「任何場景都能穩定解決」。\u003C\u002Fp>\u003Cp>另外，真實部署還會碰到 burst traffic、混合工作負載、以及延遲和吞吐之間的取捨。摘要沒有說明作者是否已經處理這些問題，所以它更像是一個值得追蹤的系統方向，而不是可以直接照抄的生產方案。\u003C\u002Fp>\u003Ch2>限制與還沒回答的問題\u003C\u002Fh2>\u003Cp>這份摘要最大的限制，就是資訊太少。沒有公開 benchmark 數字，就很難判斷改善幅度，也沒辦法知道這方法在哪些負載下表現最好。對工程師來說，這會直接影響採用意願，因為排程優化通常非常吃場景。\u003C\u002Fp>\u003Cp>第二個問題是預測本身。整個方法的前提，是在 request 開始前就能估出輸出 token 數。如果這個估計誤差太大，排程器雖然不再完全盲飛，但還是可能做錯資源配置。換句話說，方法的上限，很大程度取決於預測的品質。\u003C\u002Fp>\u003Cp>第三個問題是公平性與系統整合。就算這個策略在某些場景有用，實際服務還要考慮不同使用者、不同類型請求之間的公平分配，以及既有 serving stack 能不能接得上。摘要沒有交代這些細節，所以目前還不能把它當成成熟方案看待。\u003C\u002Fp>\u003Cp>但從研究角度來看，這篇確實抓到一個很真實的痛點：在黑箱 LLM 服務裡，哪怕只多知道一個 request 的特性，也可能讓排程器少走很多冤枉路。它不是要把問題變簡單，而是要讓原本幾乎看不見的排程，變得稍微可控一點。\u003C\u002Fp>\u003Cp>對台灣做 LLM infra、API gateway、或多租戶推論服務的團隊來說，這種思路很有參考價值。因為很多時候，真正能優化的不是模型，而是你怎麼在模型外面安排它。這篇論文談的，就是那一層最容易被忽略、但影響很大的地方。\u003C\u002Fp>","這篇論文用「預測輸出長度」來改善黑箱 LLM 推論排程，想在看不到模型內部的情況下，減少排隊摩擦、提升大規模服務效率。","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.06970",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778740253221-wgy6.png",[13,14,15,16,17],"LLM inference","scheduling","black-box model","output length prediction","serving 
infrastructure","zh",1,false,"2026-05-14T06:30:31.546746+00:00","2026-05-14T06:30:31.423+00:00","done","d835041b-e901-41e7-8619-a24e7d06d8d5","taming-black-box-llm-inference-scheduling-zh","research","407ca117-f24b-4ff9-96b8-09d4d4733b31","published","2026-05-14T09:00:16.963+00:00",[31,32,33],"黑箱 LLM 推論的難點，在於系統常常不知道每個 request 會生成多長。","這篇論文主張用提交時的輸出長度預測，來改善排程與資源分配。","摘要沒有公開完整 benchmark 數字，因此目前只能確認方法方向，不能判定實際提升幅度。","0c35a120-52fc-41fc-afa3-d404eb934158","[-0.00019745932,0.0074997973,0.028721549,-0.08625974,-0.01841312,0.0015328996,-0.023089219,0.0068189595,0.010911656,-0.0049127424,0.02857393,-0.01923826,0.0072214077,0.014380037,0.12425366,0.039594818,0.00929726,0.013336617,0.012609115,-0.018561779,5.9800823e-05,0.01372731,-0.010382125,0.0056571714,0.008218571,0.014205815,0.023432128,0.0066764737,0.046742037,-0.0050728335,-0.016854765,0.03513996,0.014365246,0.02564777,-0.0075734225,0.037242714,0.022945985,-0.030399231,0.010644317,0.021246951,-0.02855294,-0.01612757,0.02705973,-0.0056889527,-0.041615028,0.012278777,-0.008510274,-0.047715638,-0.009425984,0.024092358,0.007383642,0.026402354,0.0046284343,-0.14273798,0.0024879372,-0.011588324,-0.023174763,0.0153360255,0.01818666,-0.006676759,-0.038240038,0.03192211,-0.004193864,-0.007072831,-0.014439921,-0.021020556,0.017641732,0.013806879,-0.015710281,-0.016079905,-0.018969469,-0.019640144,0.0023164968,-0.026333544,0.015000479,-0.02418658,0.01297251,0.028269239,-0.009729188,-0.0013098071,0.02702386,-0.024913205,-0.012968208,0.010399046,-0.008078428,-0.0012894489,0.011846533,0.01654889,0.01532464,0.0042715683,0.0011524673,0.017652383,0.0073798387,0.017724192,0.008893468,-0.003776579,-0.006227461,-0.005193336,0.0045580347,-0.036775257,-0.017161664,-0.007488842,0.004527575,-0.001782705,0.00157433,-0.015270697,0.0004431201,-0.005301347,-0.00478364,0.00999887,-0.0055675944,-0.019364422,0.0124621,-0.0058562406,-0.01205735,-0.1235168,-0.0037867206,-0.008533201,-0.006198437,-0.00084129337,-0.0025088917,0.013671665,0.00881112,0.01
7319258,-0.011130287,-0.009861587,0.021527601,-0.01624869,-0.0071004676,-0.0044221804,-0.0018901291,-0.0039287205,-0.0022461456,0.021834284,-0.017427463,0.021547098,0.028476954,-0.03696732,-0.015699835,-0.006786789,-0.01805824,0.028578334,0.00515626,-0.010750714,-0.016087553,-0.023478145,-0.032063093,-0.0037067146,-0.012192674,-0.0032626747,0.028171446,-0.010473277,-0.0063549206,-0.012208874,0.01213828,-0.026338104,0.0044589634,0.006021056,0.018500552,0.011508709,0.012466507,0.014746145,-0.008946088,-0.003965029,-0.030353693,0.023075977,0.011304348,-0.0009792639,-0.015246706,0.008509213,-0.007070149,-0.011213471,-0.00680363,0.0036318416,0.015904378,-0.006197353,-0.03166681,-0.009358211,0.0016750483,-0.022960264,0.00023643747,0.012971037,-0.025312912,0.029433308,0.006848781,0.030535076,0.0036699956,0.025217364,0.02983876,0.040687524,-0.015999135,0.0030842766,-0.00046130246,-0.021777872,-0.0099704135,-0.032939937,-0.0030965148,-0.012960021,-0.0041180053,0.013365135,-0.018260613,0.022636881,0.042702276,-0.037271053,0.00015710585,-0.022468409,0.0011602929,-0.019577064,0.0071975617,0.007849784,-0.00012470158,0.015521396,0.011181417,0.0030426518,0.023802351,-0.0083374595,-0.014240246,0.008460442,0.02533221,-0.005192727,0.014228864,-0.020014366,0.0028831835,0.020695772,-0.007236094,0.0070779156,-0.0003485314,0.0066499556,-0.0065360735,0.019808492,0.004517004,2.459594e-05,0.0026544896,0.019544763,0.0022549657,0.020600017,0.0033573704,0.022210462,0.028354742,0.008657348,-0.010243053,0.03729972,0.0074456437,-0.0057816687,0.024251973,-0.00515324,0.027799746,-0.01231604,-0.0003636674,0.00043372958,0.015787797,-0.01078,-0.00649452,-0.005011121,0.038298387,0.0039931494,-0.02404076,-0.009084926,-0.017491952,0.0061530517,-0.01173985,-0.015312853,0.0011485114,0.00047230406,0.030571768,0.008246476,0.011967734,0.007989662,-0.024427867,-0.031639364,0.0005124549,0.019101748,0.014497474,-0.022280706,0.025135672,-0.00868726,-0.040190093,0.033264324,0.0029806937,-0.005409163,0.013702651,0.
0261384,0.01993521,0.009993487,-0.017346624,-0.0060269875,-0.032649614,-0.0132136345,-0.022519711,-0.018885694,0.007104154,0.012398962,-0.0076451898,-0.008267802,0.0057863235,-0.016413936,-0.003362971,-0.007804376,0.0041431487,0.011192452,0.0070487177,-0.0015691896,-0.0052819513,0.03294892,-0.018084532,-0.0026200013,0.008756086,0.028764576,-0.026210176,-0.013618106,-0.007907735,-0.019623874,0.023840358,-0.000895878,0.0027624818,-0.021629706,0.0074398792,-0.00849149,0.017827298,-0.027716089,0.0039794785,-0.026890524,0.0011915036,0.018994017,-0.01735714,0.0033246272,-0.0008650247,0.0062425174,0.029668726,-0.017925946,-0.041196447,0.008158693,0.005089485,-0.02750538,0.015054024,-0.004910809,-0.0046213353,0.0030364844,-0.013323782,-0.011102551,0.003142236,-0.007659889,-0.007371015,0.008022978,-0.04497714,0.023973783,0.015764078,0.021609934,-0.012783545,-0.04593454,0.01976277,0.00804542,0.012669153,-0.014086822,-0.028595757,0.005055032,0.004669323,-0.00071881304,0.018684607,-0.0077133467,0.0011091776,0.006016046,0.0059763584,0.0035857218,0.025114965,-0.02885006,-0.005949266,0.014295616,0.0027242133,-0.01238722,-0.0046785893,0.014160356,-0.0028216687,0.01684248,0.01872648,0.008188669,0.005639321,0.012810272,-0.00855028,-0.004568679,-0.021535572,0.027863959,-0.02086671,-0.007924219,-0.007423444,8.694006e-05,0.011538875,-0.001454642,0.0035285028,0.0060031526,-0.0013733361,0.029606678,0.02364946,0.009347374,-0.013807557,-0.008445976,-0.0139478,0.022912167,-0.013748667,-0.012260416,0.010964884,0.0033289348,0.010079921,0.0065893056,0.003788653,-0.018947223,-0.012105072,0.009751866,0.022122946,-0.019283263,0.010120745,0.0038955747,-0.007873676,0.02324702,0.017047174,-0.00887869,-0.017478144,-0.026248958,0.027437458,0.008727745,-0.033957954,0.002116341,-0.023897259,-0.007614786,-0.003811213,-0.011014808,-0.035266265,-0.03302851,-0.01044282,-0.03199436,0.014849242,-0.01830637,-0.029936213,-0.022510342,-0.016782742,-0.0106657185,-0.013601135,-0.009678208,-0.014247986,-0.0424224,0.
0034521758,-0.002680185,-0.004751299,-0.011369364,-0.015704073,-0.02081912,-0.00875336,-0.006076971,-0.008874287,0.0082597025,0.056300577,0.0070041274,0.011003507,-0.030561715,0.024338568,-0.015630241,0.016337046,0.0055497745,-0.019071078,0.0010617297,-0.012103262,-0.030091377,0.009935231,0.019456618,0.009630951,-0.026908653,-0.029670645,-0.013743708,0.019076405,0.029249148,-0.00015800116,0.012050497,0.008264668,0.0019642035,-0.013668676,0.004560369,0.020391697,0.008676309,0.021886764,-0.003146478,-0.01248471,-0.026572714,0.013220834,-0.014589411,-0.009454307,0.015381889,-0.013565046,0.010189229,-0.025323855,-0.0070194523,0.0052289325,0.036080223,-0.003973061,0.015828056,-0.0090626385,-0.009696734,-0.020493243,-0.023565404,0.00049158337,-0.0066050054,-0.008055864,-0.009931368,0.015120582,-0.031549733,0.012819179,-0.0028046684,-0.017409075,0.029816639,-0.010808574,-0.051661015,-0.019948635,0.019360155,0.027612617,0.0092031555,0.00068005954,-0.0036080936,-0.009388671,-0.0018077247,-0.020601857,-0.01872408,0.013681917,-0.042930227,0.007522769,0.04587798,-0.01362984,-0.007058012,-0.0019045813,0.009080858,0.014229406,0.039517745,0.0018178668,0.034903824,0.007192703,-0.016762491,0.018661922,-0.0105817625,8.095064e-05,0.0081096785,0.017728016,0.0026572128,-0.023237105,0.0050515593,-0.011880753,-0.023990415,0.033591837,-0.10841768,0.0064464756,-0.004689505,-0.00453604,0.022208974,-0.016499965,-0.0045807385,-0.032450028,-0.008233354,0.0026489366,-0.0025965073,0.004585489,0.04126196,0.037046723,0.009570613,-0.02130043,-0.029283576,-0.0067317826,0.015877448,-0.006112238,0.03100469,0.026775144,0.012325159,0.021920558,-0.008999861,-0.008359601,0.020137599,0.025903767,0.00623833,0.010692834,-0.021435853,-0.036501348,0.007871337,0.019463897,0.029473651,-0.007247306,0.038792238,-0.0009182565,0.010199882,0.017247977,0.023311857,0.018201528,-0.018588522,-0.011402541,0.02086249,-0.012122993,-0.0030892289,0.0058284756,0.020908777,0.01853875,-0.03568529,-0.024045559,-0.017994966,0.00112
13536,-0.0032055094,-0.013902897,-0.010540298,0.010768055,-0.010791825,0.026488261,-0.02115487,-0.0026065365,-0.0270307,0.019343449,0.00047726612,0.006737292,-0.0050083986,0.04134352,-0.007492358,0.0043889834,-0.019789869,-0.027873453,0.028996142,0.025500715,-0.036705166,0.012985195,-0.010647396,0.007848294,0.0026436446,-0.0038934946,-0.017678348,-0.019127091,-0.0700751,-0.0100249415,-0.03154473,-0.018256597,0.009251923,-0.009870677,-0.012532804,-0.027887486,-0.00070865406,-0.027252913,-0.003494306,0.009995791,0.02572965,-0.018945748,-0.00035577596,-0.00865236,-0.0024140729,0.0031104963,0.0074453084,-0.04539873,-0.001950529,0.00017717441,0.033086307,0.00012885031,-0.02400937,-0.022210233,0.010201515,0.005988307,0.007714777,-0.022850294,-0.014176831,-0.14128895,-0.014846638,0.00076548656,-0.014837684,0.010254038,0.00215539,-0.027305592,-0.01157124,-0.023483044,-0.010172191,-0.0147000225,-0.025739398,0.013369307,-0.01124968,-0.024631245,0.11487705,-0.00038146856,0.0036140908,-0.032482266,-0.028547628,0.02007217,-0.010972697,0.0026256249,0.013003038,0.021574333,0.009926458,0.00880392,-0.012939846,-0.0013168801,0.051773842,0.015141218,0.018911108,0.007850615,-0.0078365775,0.010076622,-0.013697552,-0.0007328425,-0.014050025,0.00045112605,0.001251724,0.013148398,-0.00077697186,-0.011940023,0.014124174,0.007878459,-0.0049830778,-0.0336396,-0.011219817,0.011489874,0.0063762353,-0.007283529,-0.06761332,-0.008621323,-0.011113958,0.034220047,-0.000857073,-0.027731156,0.028940238,0.019734466,-0.005210252,0.027843956,0.0028854127,0.019995509,0.0005660304,-0.013881193,-0.006917459,0.021817168,0.029566472,0.0054693953,0.022589406,-0.008637779,0.007822586,-0.0017924175,-0.022126539,0.0114760585,-0.011704268,0.011954113,0.036160305,0.018959336,-0.015659897,0.008895799,-0.0036095248,-0.004261959,-0.0055206167,0.0130496435,-0.0045520975,0.011576582,-0.0055198097,-0.0117779365,-0.008551601,-0.013067508,0.0152673945,0.025514634,-0.009779281,-0.005139516,0.025160585,-0.0464219,0.01852549
2,-0.017158313,-0.005521683,-0.00068553875,-0.04553469,0.016399132,-0.014405992,0.034244806,0.013338604,-0.015400111,-0.004064847,0.0038866608,-0.010584072]",[37,39,41,43,45],{"name":17,"slug":38},"serving-infrastructure",{"name":15,"slug":40},"black-box-model",{"name":16,"slug":42},"output-length-prediction",{"name":13,"slug":44},"llm-inference",{"name":14,"slug":14},{"id":27,"slug":47,"title":48,"language":49},"taming-black-box-llm-inference-scheduling-en","Taming Black-Box LLM Inference Scheduling","en",[51,57,63,69,75,81],{"id":52,"slug":53,"title":54,"cover_image":55,"image_url":55,"created_at":56,"category":26},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":26},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":26},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 
代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":26},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":26},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":82,"slug":83,"title":84,"cover_image":85,"image_url":85,"created_at":86,"category":26},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[88,93,98,103,108,113,118,123,128,133],{"id":89,"slug":90,"title":91,"created_at":92},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 
研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":134,"slug":135,"title":136,"created_at":137},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]