[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-refdecoder-reference-conditioned-video-decoder-en":3,"article-related-refdecoder-reference-conditioned-video-decoder-en":36,"series-research-66608799-65b1-4143-afc1-d1457cdd696a":88},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":30,"topic_cluster_id":34,"embedding":35,"is_canonical_seed":20},"66608799-65b1-4143-afc1-d1457cdd696a","RefDecoder adds reference conditioning to video decoders","\u003Cp data-speakable=\"summary\">RefDecoder conditions video decoders on a reference image to preserve detail and consistency.\u003C\u002Fp>\u003Cp>Video generation systems usually put most of the conditioning power into the denoising network, but leave the decoder unconditional. This paper argues that mismatch is part of why generated videos can lose detail or drift away from the input image during reconstruction and editing.\u003C\u002Fp>\u003Cp>Its answer is RefDecoder, a reference-conditioned video VAE decoder that injects a high-fidelity reference frame directly into the decoding path. For engineers building image-to-video or video editing pipelines, the practical appeal is simple: improve fidelity without retraining the whole system.\u003C\u002Fp>\u003Ch2>What problem this paper is trying to fix\u003C\u002Fh2>\u003Cp>The authors focus on a structural weakness in the de facto video generation stack. Latent diffusion models often use heavily conditioned denoising networks, but the decoder that turns latents back into pixels is usually unconditional. 
In other words, the model pays attention to the reference signal while denoising, then drops that signal at the very end.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778912631938-tra3.png\" alt=\"RefDecoder adds reference conditioning to video decoders\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>That asymmetry matters because the decoder is where fine visual details are actually restored. If the decoder is not conditioned, it can blur or distort structure that should have been preserved from the input image. The paper says this leads to loss of detail and inconsistency relative to the source image.\u003C\u002Fp>\u003Cp>RefDecoder is designed to close that gap. Instead of treating decoding as a generic reconstruction step, it gives the decoder access to the reference image so it can preserve structural integrity all the way through upsampling.\u003C\u002Fp>\u003Ch2>How RefDecoder works in plain English\u003C\u002Fh2>\u003Cp>The core idea is straightforward: feed the reference image into the decoder alongside the denoised video latents. The paper does this with reference attention, which lets the decoder co-process both signals during each up-sampling stage.\u003C\u002Fp>\u003Cp>More specifically, a lightweight image encoder converts the reference frame into high-dimensional tokens. Those tokens are then combined with the denoised video latent tokens inside the decoder. The result is a decoder that can recover details using the original reference as a guide, rather than trying to infer everything from the latent alone.\u003C\u002Fp>\u003Cp>This is a useful design choice because it keeps the change local. 
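\u003C\u002Fp>\u003Cp>The reference-attention step described above can be sketched in a few lines. The block below is an illustrative, hypothetical single-head cross-attention in NumPy, not the paper's implementation: video latent tokens act as queries, reference-image tokens as keys and values, and a residual connection keeps the block drop-in friendly for an existing decoder stage.\u003C\u002Fp>

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def reference_attention(video_tokens, ref_tokens):
    """Single-head cross-attention sketch: video latent tokens (queries)
    attend to reference-image tokens (keys/values); the residual means the
    block adds to an existing decoder stage rather than replacing it."""
    d = video_tokens.shape[-1]
    scores = video_tokens @ ref_tokens.T / np.sqrt(d)   # (N_video, N_ref)
    weights = softmax(scores, axis=-1)                  # each row sums to 1
    return video_tokens + weights @ ref_tokens          # (N_video, d)

rng = np.random.default_rng(0)
video = rng.normal(size=(16, 64))  # denoised video latent tokens at one decoder stage
ref = rng.normal(size=(32, 64))    # tokens from a lightweight reference-image encoder
out = reference_attention(video, ref)
print(out.shape)  # (16, 64)
```

\u003Cp>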
The paper says RefDecoder can be swapped into existing video generation systems without additional fine-tuning, which makes it easier to adopt than a full pipeline redesign.\u003C\u002Fp>\u003Cp>In practical terms, that means a team with an existing latent video generator can potentially upgrade the decoding stage and gain better fidelity without reworking the training recipe from scratch. The paper positions this as a decoder-level fix for a problem that usually gets blamed on the generative model as a whole.\u003C\u002Fp>\u003Ch2>What the paper actually shows\u003C\u002Fh2>\u003Cp>The paper reports consistent improvements across multiple decoder backbones, including Wan 2.1 and VideoVAE+. It also says the approach works across several reconstruction benchmarks: Inter4K, WebVid, and Large Motion.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778912641328-jrqk.png\" alt=\"RefDecoder adds reference conditioning to video decoders\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>The strongest concrete number in the abstract is up to +2.1 dB PSNR over unconditional baselines. That is a reconstruction metric, so the result suggests RefDecoder is better at preserving pixel-level fidelity to the reference input.\u003C\u002Fp>\u003Cp>The authors also report better scores on the VBench I2V \u003Ca href=\"\u002Ftag\u002Fbenchmark\">benchmark\u003C\u002Fa>, specifically across subject consistency, background consistency, and overall quality. That is important because image-to-video generation is not just about sharp frames; it is about keeping the subject and scene stable over time.\u003C\u002Fp>\u003Cp>Beyond image-to-video, the paper says RefDecoder generalizes to style transfer and video editing refinement. 
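\u003C\u002Fp>\u003Cp>To put the +2.1 dB PSNR figure quoted above in perspective: PSNR is a log-scale measure of mean squared error against the reference, so a roughly 2 dB gain corresponds to cutting MSE by about 1.6x. The snippet below uses illustrative synthetic numbers, not the paper's data, to show how shrinking reconstruction error moves the metric.\u003C\u002Fp>

```python
import numpy as np

def psnr(reference, reconstruction, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.uniform(size=(64, 64, 3))   # stand-in for a reference video frame
noise = rng.normal(size=frame.shape)

baseline = frame + 0.05 * noise         # simulated unconditional decoder output
conditioned = frame + 0.04 * noise      # simulated reference-conditioned output

gain = psnr(frame, conditioned) - psnr(frame, baseline)
print(round(gain, 2))  # 1.94: scaling the error by 0.8 yields 20*log10(1/0.8) dB
```

\u003Cp>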
That suggests the method is not narrowly tied to one task, although the abstract does not provide separate numbers for those additional use cases.\u003C\u002Fp>\u003Cul>\u003Cli>Improves several decoder backbones, including Wan 2.1 and VideoVAE+\u003C\u002Fli>\u003Cli>Reports up to +2.1 dB PSNR on Inter4K, WebVid, and Large Motion\u003C\u002Fli>\u003Cli>Improves subject consistency, background consistency, and overall quality on VBench I2V\u003C\u002Fli>\u003Cli>Can be swapped into existing systems without additional fine-tuning\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Why developers should care\u003C\u002Fh2>\u003Cp>If you are building video generation, image-to-video, or editing tools, the decoder is often an underappreciated place to improve output quality. This paper makes the case that conditioning should not stop at the denoiser; the final reconstruction stage matters too.\u003C\u002Fp>\u003Cp>That matters especially for workflows where the output must stay faithful to a source image, such as product demos, character animation, style transfer, or video refinement. In those settings, small structural errors can be more noticeable than general motion quality.\u003C\u002Fp>\u003Cp>The no-fine-tuning claim is also practically relevant. A decoder swap is easier to evaluate than a full model retrain, and it lowers the barrier for experimentation in existing systems. Even if the gains are modest, an architectural drop-in that improves consistency can be attractive for teams trying to reduce artifact rates.\u003C\u002Fp>\u003Cp>At the same time, the abstract leaves some open questions. It does not provide runtime cost, memory overhead, or latency impact from adding reference attention. It also does not spell out whether the gains hold equally across all tasks or only on the benchmarks listed.\u003C\u002Fp>\u003Cp>So the main takeaway is not that RefDecoder replaces video generation models. 
It is that a small change in the decoding stage can recover detail the standard pipeline tends to lose, and that may be enough to make generated video look materially closer to the source.\u003C\u002Fp>\u003Ch2>What is still unclear\u003C\u002Fh2>\u003Cp>The paper is promising, but the abstract does not answer every engineering question. We do not get training cost, \u003Ca href=\"\u002Ftag\u002Finference\">inference\u003C\u002Fa> speed, or the exact complexity added by the reference encoder and attention mechanism.\u003C\u002Fp>\u003Cp>We also do not see benchmark tables here, so the abstract gives only the headline PSNR gain and qualitative benchmark improvements. That means practitioners should treat the result as encouraging, but still validate it in their own stack and on their own content distribution.\u003C\u002Fp>\u003Cp>Even with those caveats, the paper is a useful reminder: in generative video, the decoder is not just a formatting step. If you want sharper, more faithful outputs, conditioning the decoder itself may be the missing piece.\u003C\u002Fp>","RefDecoder feeds reference image detail into video decoders, improving consistency and reconstruction without extra fine-tuning.","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.15196",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778912631938-tra3.png",[13,14,15,16,17],"video generation","decoder conditioning","reference attention","latent diffusion","image-to-video","en",0,false,"2026-05-16T06:23:34.829247+00:00","2026-05-16T06:23:34.818+00:00","done","50da1a2f-5cd6-48d9-8aa5-e75b6add633b","refdecoder-reference-conditioned-video-decoder-en","research","001e062e-f246-4bf0-aa04-27506febcf7b","published","2026-05-16T09:00:16.573+00:00",[31,32,33],"RefDecoder conditions the video decoder on a reference image to preserve detail.","It reports up to +2.1 dB PSNR and better consistency on VBench I2V.","The method can be 
swapped into existing systems without additional fine-tuning.","3103988e-c4fe-45e3-98ab-846500c9d507","[-0.03939522,-0.008610373,0.0043457197,-0.094728865,-0.036182564,0.02077958,-0.0060494947,-0.019153576,0.02388776,-0.0150731085,0.015325813,-0.035223912,0.01053839,0.021399993,0.123278275,0.017067194,-0.0044024177,-0.0055117826,0.007148279,-0.003397856,0.0021598844,0.01279278,-0.01847448,0.0008088717,-0.011261248,0.00556295,0.021348627,0.009457882,0.06282362,0.012117686,0.0076111546,0.00031450784,0.04048071,0.026277289,-0.0032510294,0.0131072225,-0.010184006,0.0062287897,0.010888916,0.024105523,-0.024585348,-0.01613924,0.014780435,-0.0030987735,-0.016282234,0.002898562,0.023398796,-0.03266511,-0.005251505,0.01799824,-0.007777439,0.020307537,-0.03212355,-0.15963642,0.0040856493,0.022626493,0.031615477,-0.00090038695,-0.02255662,0.012880864,0.00427706,-0.0003600695,0.006299942,-0.031421494,-0.009545138,-0.020294359,-0.0010347988,-0.00613032,-0.0126009155,0.00013836448,-0.0004913981,0.020263307,-0.0041558547,0.0131263025,0.048697878,-0.016427662,-0.014035607,0.008294299,0.014690109,0.03799781,0.010917005,-0.0049562734,-0.0031429492,-0.012193325,0.0138514675,0.016019903,-0.02632908,0.0059847594,0.006491616,0.0040689367,0.010858171,0.0057085175,0.016931731,0.02361716,-0.007844441,0.007830301,0.0065610735,-0.019277714,-0.00508827,0.013104782,0.0011248892,-0.028319668,0.02502878,0.012897202,0.013626019,-0.026969694,-0.016235672,0.008338757,-0.0076117557,0.016662518,0.00665124,-0.0019586943,-0.010774216,0.010679923,-0.012065477,-0.13010481,-0.0020629782,0.012816282,-0.015897788,0.01813948,-0.012427291,0.013256752,-0.0066733756,-0.00086823857,0.009926394,-0.0072750426,0.028521964,-0.0039916276,-0.035344586,0.016532434,-0.03220407,-0.015352206,0.01767992,-0.024073545,0.012338059,0.013654937,0.018726591,-0.027868427,-0.0062872395,0.008256368,0.0015403215,0.0038547372,0.0072050705,-0.0081312405,-0.011797766,-0.0222313,-0.04464017,0.007836318,-0.018250637,-0.027950158,0.0064280
503,-0.0034612685,0.013944466,-0.0037647234,0.035111338,-0.00812287,-0.015053649,0.010834323,0.0067082294,-0.006534276,0.01919801,0.0017581295,0.01477214,-0.0034980276,0.006669096,0.01117481,0.015231239,-0.0033443517,-0.008975817,0.011715593,-0.025459116,0.0101296315,-0.0021593152,-0.0257256,0.016911346,-0.0020850115,-0.0025120205,-0.03777308,0.0008025434,-0.011739181,-0.011033677,-0.0050577,-0.004785051,-0.009317009,0.048513424,-0.0050993576,-2.2155438e-05,-0.013711058,0.027170029,-0.0032353671,0.002314897,-0.004708395,0.015735444,-0.005434317,0.0040853843,-0.008923418,0.011152374,0.004326816,0.039868213,0.0173339,-0.015623189,-0.0020566466,0.033051454,0.001304766,-0.020344354,0.012087892,0.0052480097,0.00760179,-0.0008075773,-0.009690064,0.004778744,-0.0013654293,0.019144233,-0.015066677,0.0061904625,-0.0014024236,-0.026581671,-0.024051221,-0.013415208,-0.04151258,0.03154189,-0.012886899,0.0050106538,-0.0077356873,-0.01156021,-0.05171696,-0.01299648,0.025490634,0.022529567,0.010047546,-0.012866144,0.008773626,-0.010626332,-0.02731488,-0.014347865,0.006717294,-0.016290115,0.011214714,0.048904747,0.017112788,-0.025317071,0.013931607,0.01346189,0.0076372777,0.028032802,0.016748996,0.0040094373,0.009816676,0.013971045,0.0036676386,-0.016187046,-0.0049389065,-0.009017728,-0.02969366,-0.014768887,0.03355829,0.0058382535,-0.006632145,-0.0010730746,-0.0030428092,-0.010082283,0.013805711,0.0024434656,-0.0024809693,0.028280146,0.017795654,0.013958335,0.008306095,-0.026739933,0.01282261,0.0199123,0.015522232,0.011544932,-0.010720217,-0.0020606075,-0.0047902996,-0.018145313,0.026196066,0.0026635074,-0.0297518,-0.022441925,0.030070804,0.010923858,0.012272196,0.009555241,0.01274095,0.0035884602,-0.057528213,0.0061007896,-0.008415147,-0.010193972,0.019686684,0.010713129,-0.011005466,0.007732546,-0.021502415,0.0012584142,-0.010896102,0.014742539,0.027657168,0.022217924,0.006996467,0.018842345,0.029968308,-0.012067353,-0.015645102,-0.012786973,0.039875742,0.007370857,0.015829843,-
0.014054549,0.010018169,-0.038295105,-0.008286466,-0.018412821,0.0059063896,0.013344458,0.003996556,4.2323925e-05,-0.0078081614,-0.016500123,-0.013671997,-0.012214055,0.0065372777,-0.01931102,-0.00744561,-0.013148289,-0.043426353,-0.018643592,-0.012003094,-0.020095862,0.0010315252,0.0051121465,-0.028384797,0.0091902185,0.0029426331,0.02276037,-0.005592963,-0.03490188,-0.015749041,0.013238897,-0.0035387306,0.00039704196,0.00031697264,-0.029533926,0.02064528,0.011059357,0.0030555024,0.016180078,-0.031625263,-0.0007007753,0.003977969,0.012597132,-0.018148072,0.015036932,0.01848098,-0.0010092347,-0.0066362447,0.02568863,-0.0051288446,-0.008266257,0.026024275,-0.031186832,-0.017488936,-0.0024049028,-0.0037903183,-0.008645932,-0.007112303,-0.006745262,-0.018386507,0.018089436,-0.015572578,0.0008397498,-0.010058331,-0.024525946,0.004290901,-0.004047655,-0.036356535,-0.021450613,0.034336694,0.011055617,0.020516785,-0.016939018,-0.0289899,-0.018462056,0.01571057,0.011450805,-0.00797928,-0.027131055,0.037019566,-0.00041362504,-0.0060139163,-0.01676149,0.010690955,0.025805445,0.025552895,0.003577225,0.016165165,-0.0031005046,-0.009313902,0.010604674,0.012774189,-0.013084682,-0.014478421,0.01771065,0.0216663,0.012803878,0.0100122895,-0.029342746,-0.01018881,0.009498283,-0.018713498,0.05231045,-0.0181411,0.020882081,0.016817594,-0.030004585,-0.008747719,0.007344864,0.0041843858,0.019550728,-0.020683434,0.008126938,0.021287592,-0.0035566858,0.010341629,-0.011661044,-0.0060585816,0.024224456,-0.017660236,-0.008984578,-0.012647643,0.005153443,-0.021142632,0.019055296,-0.046809442,0.005430642,0.006594545,-0.034205772,-0.0019585956,-0.022626242,-0.027623372,0.0003941293,0.025044767,0.016941333,0.006322068,-0.007843712,-0.010683648,-0.028515765,0.016324183,0.032369073,0.030022344,0.010777111,-0.028690618,-0.0014284268,0.02257023,0.016399058,0.0049406975,-0.004275822,-0.042912427,0.0040051513,-0.01723419,0.0084113665,0.031748302,0.020193107,-0.006100862,-0.026466163,-0.0044575064,-0.04
54377,0.011755749,0.024229376,0.0007894617,0.02529079,-0.0037212078,0.0029089765,9.945572e-05,0.0034557139,-0.013420296,0.0070706345,0.0009742636,0.019519074,0.012240601,0.00071114366,0.027438072,0.022350669,0.014328574,-0.008519589,0.008301849,-0.017949643,0.0011033685,0.042263396,0.029312013,-0.011255552,0.008130023,-0.0058842395,-0.010033205,-0.008219173,0.013772403,0.005684166,-0.013045637,0.02806799,-0.012347,0.020506132,-0.0059251995,-0.011919519,-0.01011717,-0.022655157,0.0014677689,0.0069950633,-0.009415055,-0.009839297,-0.022633435,0.018197691,0.005194254,0.018600393,0.0379642,-0.011144738,-0.006679574,-0.011337761,-0.021233317,-0.015958378,0.015982661,0.0051581925,-0.01814175,-0.009571311,-0.020401664,0.01429622,0.013743828,0.0054184375,0.029209,0.0045740115,0.01859843,9.35759e-05,0.009936879,0.01454378,-0.0036976242,0.01280063,-0.010770259,-0.0077624302,0.018724658,0.0012910534,-0.0056476477,0.00043932177,-0.034827102,0.027074067,-0.0781349,0.02048122,0.016975837,-0.016303081,0.014122641,0.014168998,-0.003849795,-0.027935876,0.00931763,-0.0051011047,0.002837991,-0.009936541,0.003655143,0.024395676,0.02177548,0.0027236189,0.016094571,-0.0011877213,-0.0018325207,-0.022370221,0.019566135,-0.012676479,0.028727878,0.025542784,-0.0030615139,-0.009567383,0.010505203,0.028992428,0.019347114,0.017870504,-0.023190958,-0.022712732,0.00694435,0.016956454,0.024139378,-0.005134912,0.013734231,-0.0036891245,-0.0032537028,-0.021326674,0.0029729863,0.033822864,-0.0043833754,-0.017314814,0.013228982,0.025254745,-0.00033233035,0.022249848,0.012043943,0.00037624154,-0.042602547,-0.010973098,0.0032624537,-0.028233627,0.0043891515,-0.028316416,-0.00298326,0.006953467,-0.015636427,0.013685774,-0.008420612,-0.03404843,-0.0062297382,0.028216807,0.0032281252,0.015232653,-0.013082686,0.042906195,-0.004654113,0.0033498758,-0.017321039,-0.0057008,0.0242639,0.032022767,0.022339735,-0.013046798,-0.0034829378,0.001611906,0.0016934597,-0.018629016,-0.005340446,-0.03623771,-0.11365231,-0.
02159521,0.012363414,0.02182771,0.024588687,0.004132578,0.0013277592,-0.021722658,0.012030389,-0.012031997,-0.010624718,0.00037196736,-0.005633927,0.0061773243,-0.0025572975,0.011109611,-0.0025434643,-0.023309667,0.009323257,-0.036803566,-0.02571686,-0.014883534,-0.0021282523,0.007025106,-0.008782594,0.013884246,-0.014071477,0.008287518,0.017285736,-0.027509162,-0.005932126,-0.12226108,0.0018472782,-0.0044018896,-0.0026383908,-0.002261763,0.009298555,-0.013692564,0.0035434498,0.0051983767,-0.019755362,-0.02612176,-0.018702563,-0.011715575,-0.009387005,0.015997125,0.112099245,0.00945352,-0.008985001,-0.014917692,-0.025689345,-0.010525652,-0.027946781,-0.0035497674,0.028054062,0.024021521,0.0046548457,0.029376442,-0.0065264623,-0.020647014,0.013481749,0.0096627185,0.010115851,-0.005564491,-0.014583714,0.032540057,0.005131069,0.0036544737,-0.002528121,0.0016223515,0.01588328,0.039717734,0.006426415,-0.0003415491,-0.010553618,0.008287265,-0.0011667976,0.008560801,-0.034562804,-0.020237796,0.016073452,-0.026719589,-0.04888707,-0.008024977,0.004054776,0.008119024,-0.004480108,-0.021979786,0.016159588,0.014443723,-0.008147742,0.011800997,0.008303759,0.011611774,0.0064782044,-0.022051532,-0.0028751565,-0.005934757,0.038396467,0.009649762,-0.005770875,-0.007331229,0.008199991,0.011163442,-0.0023094164,0.00010689413,-0.016851261,-0.009792021,7.548961e-05,0.03068373,-0.007003564,0.0065470194,-0.00045107683,-0.018999808,-0.022569075,0.0066251964,-0.0059803054,0.010301784,0.004464245,0.021539459,0.017987153,0.025397092,4.3350894e-05,-0.0074076974,-0.0063443272,-0.0064244173,0.021253934,0.012485485,0.0110449055,-0.0002048883,-0.021298394,0.035735425,-0.02911526,-0.018577792,-0.051441323,-0.0008054178,0.013625591,0.029285965,0.017518507,0.04102299,0.016548805]",{"tags":37,"relatedLang":47,"relatedPosts":51},[38,40,42,44,46],{"name":16,"slug":39},"latent-diffusion",{"name":14,"slug":41},"decoder-conditioning",{"name":15,"slug":43},"reference-attention",{"name":13,"slug":45},"video-
generation",{"name":17,"slug":17},{"id":27,"slug":48,"title":49,"language":50},"refdecoder-reference-conditioned-video-decoder-zh","RefDecoder 讓影片解碼器吃參考圖","zh",[52,58,64,70,76,82],{"id":53,"slug":54,"title":55,"cover_image":56,"image_url":56,"created_at":57,"category":26},"3cb0da95-801d-485d-9583-539027365723","why-ai-safety-teams-are-wrong-blame-only-alignment-en","Why AI safety teams are wrong to blame only alignment","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778947422376-naaj.png","2026-05-16T16:03:17.251356+00:00",{"id":59,"slug":60,"title":61,"cover_image":62,"image_url":62,"created_at":63,"category":26},"d3d5812b-849a-4a6e-8c8c-d859618bd4b2","why-fine-tuning-llms-domain-tasks-right-default-en","Why fine-tuning LLMs for domain tasks is the right default","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778916227001-iu04.png","2026-05-16T07:23:33.047894+00:00",{"id":65,"slug":66,"title":67,"cover_image":68,"image_url":68,"created_at":69,"category":26},"2a05602e-4f77-4e7a-a073-0f3878a9d9de","atlas-one-token-visual-reasoning-en","ATLAS Makes Visual Reasoning Use One Token","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778912030332-58uq.png","2026-05-16T06:13:36.193661+00:00",{"id":71,"slug":72,"title":73,"cover_image":74,"image_url":74,"created_at":75,"category":26},"d60602fc-ed44-4c5e-8aa1-b0285672b8ba","entitybench-long-range-video-consistency-en","EntityBench Tackles Long-Range Video 
Consistency","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778911850469-mgcy.png","2026-05-16T06:10:29.577019+00:00",{"id":77,"slug":78,"title":79,"cover_image":80,"image_url":80,"created_at":81,"category":26},"94994abd-e24d-4fd1-b941-942d03d19acf","turboquant-seo-shift-small-sites-en","TurboQuant and the SEO Shift for Small Sites","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840455122-jfce.png","2026-05-15T10:20:28.134545+00:00",{"id":83,"slug":84,"title":85,"cover_image":86,"image_url":86,"created_at":87,"category":26},"670a7f69-911f-41e8-a18b-7d3491253a19","turboquant-vllm-comparison-fp8-kv-cache-en","TurboQuant vs FP8: vLLM’s first broad test","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839858405-b5ao.png","2026-05-15T10:10:37.219158+00:00",[89,94,99,104,109,114,119,124,129,134],{"id":90,"slug":91,"title":92,"created_at":93},"a2715e72-1fe8-41b3-abb1-d0cf1f710189","ai-predictions-2026-big-changes-en","AI Predictions for 2026: Brace for Big Changes","2026-03-26T01:25:07.788356+00:00",{"id":95,"slug":96,"title":97,"created_at":98},"8404bd7b-4c2f-4109-9ec4-baf29d88af2b","ml-papers-of-the-week-github-research-desk-en","ML Papers of the Week Turns GitHub Into a Research Desk","2026-03-27T01:11:39.480259+00:00",{"id":100,"slug":101,"title":102,"created_at":103},"87897a94-8065-4464-a016-1f23e89e17cc","ai-ml-conferences-to-watch-in-2026-en","AI\u002FML Conferences to Watch in 2026","2026-03-27T01:51:54.184108+00:00",{"id":105,"slug":106,"title":107,"created_at":108},"6f1987cf-25f3-47a4-b3e6-db0997695be8","openclaw-agents-manipulated-self-sabotage-en","OpenClaw Agents Can Be Manipulated Into 
Failure","2026-03-28T03:03:18.899465+00:00",{"id":110,"slug":111,"title":112,"created_at":113},"a53571ad-735a-4178-9f93-cb09b699d99c","vega-driving-language-instructions-en","Vega: Driving with Natural Language Instructions","2026-03-28T14:54:04.698882+00:00",{"id":115,"slug":116,"title":117,"created_at":118},"a34581d6-f36e-46da-88bb-582fb3e7425c","personalizing-autonomous-driving-styles-en","Drive My Way: Personalizing Autonomous Driving Styles","2026-03-28T14:54:26.148181+00:00",{"id":120,"slug":121,"title":122,"created_at":123},"2bc1ad7f-26ce-4f02-9885-803b35fd229d","training-knowledge-bases-writeback-rag-en","Training Knowledge Bases with WriteBack-RAG","2026-03-28T14:54:45.643433+00:00",{"id":125,"slug":126,"title":127,"created_at":128},"71adc507-3c54-4605-bbe2-c966acd6187e","packforcing-long-video-generation-en","PackForcing: Efficient Long-Video Generation Method","2026-03-28T14:55:02.646943+00:00",{"id":130,"slug":131,"title":132,"created_at":133},"675942ef-b9ec-4c5f-a997-381250b6eacb","pixelsmile-facial-expression-editing-en","PixelSmile Framework Enhances Facial Expression Editing","2026-03-28T14:55:20.633463+00:00",{"id":135,"slug":136,"title":137,"created_at":138},"6954fa2b-8b66-4839-884b-e46f89fa1bc3","adaptive-block-scaled-data-types-en","IF4: Smarter 4-Bit Quantization That Adapts to Your Data","2026-03-31T06:00:36.65963+00:00"]