[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-aisafetybenchexplorer-ai-safety-benchmarks-en":3,"tags-aisafetybenchexplorer-ai-safety-benchmarks-en":36,"related-lang-aisafetybenchexplorer-ai-safety-benchmarks-en":44,"related-posts-aisafetybenchexplorer-ai-safety-benchmarks-en":48,"series-research-6e6c4ade-4dae-48c3-9a94-a081e08ab931":85},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":30,"topic_cluster_id":34,"embedding":35,"is_canonical_seed":20},"6e6c4ade-4dae-48c3-9a94-a081e08ab931","AISafetyBenchExplorer maps AI safety benchmarks","\u003Cp data-speakable=\"summary\">AISafetyBenchExplorer catalogs 195 \u003Ca href=\"\u002Ftag\u002Fai-safety\">AI safety\u003C\u002Fa> benchmarks to expose fragmented measurement.\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.12875\">AISafetyBenchExplorer: A Metric-Aware Catalogue of AI Safety Benchmarks Reveals Fragmented Measurement and Weak Benchmark Governance\u003C\u002Fa> is not a new safety model or a benchmark scorecard. It is a structured catalog of the safety-evaluation ecosystem itself, built to make sense of how AI safety benchmarks are defined, measured, and maintained across years of research.\u003C\u002Fp>\u003Cp>That matters because if you are building or evaluating AI systems, the benchmark layer is where a lot of the confusion starts. 
When benchmark definitions, metrics, and governance practices are scattered, it becomes harder to compare results, track what changed, or trust that a score means the same thing from one paper to the next.\u003C\u002Fp>\u003Ch2>What problem the paper is trying to fix\u003C\u002Fh2>\u003Cp>The paper starts from a practical problem: AI safety benchmarking is fragmented. Instead of one shared measurement framework, the field has accumulated many benchmarks over time, each with its own assumptions, metric choices, and documentation quality. That makes it difficult to answer basic engineering questions like which benchmark should be used for a given safety concern, whether two benchmarks are measuring the same thing, or how much confidence to place in a reported result.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778739653161-5vdb.png\" alt=\"AISafetyBenchExplorer maps AI safety benchmarks\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>A second issue is governance. The title points to weak benchmark governance, and the abstract frames the work as a way to reveal that problem through a catalog. In plain terms, the paper is trying to make the safety benchmark landscape legible enough that researchers and practitioners can see where the measurement stack is solid and where it is shaky.\u003C\u002Fp>\u003Cp>For developers, this is not just taxonomy for taxonomy’s sake. Benchmarks shape model selection, fine-tuning priorities, safety audits, and release decisions. If the underlying measurement system is inconsistent, downstream decisions can be too.\u003C\u002Fp>\u003Ch2>How AISafetyBenchExplorer is organized\u003C\u002Fh2>\u003Cp>The core contribution is AISafetyBenchExplorer, a structured catalog of 195 AI safety benchmarks released between 2018 and 2026. 
The abstract says the catalog is organized through a multi-sheet schema, which records benchmark-level metadata, metric-level definitions, benchmark-paper metadata, and related information.\u003C\u002Fp>\u003Cp>That “metric-aware” framing is important. Many benchmark lists stop at names, dates, or broad topic labels. This project goes deeper by representing how each benchmark is measured, not just what it is called. In practice, that kind of schema is what lets a dataset support cross-benchmark analysis instead of just serving as a bibliography.\u003C\u002Fp>\u003Cp>Although the abstract does not spell out every field in the schema, the wording suggests the catalog is meant to support structured comparison across benchmarks rather than one-off reading. For engineers, that usually means the data is designed for filtering, grouping, and spotting gaps: for example, which benchmarks have clear metric definitions, which ones are poorly documented, and where the field has duplicated effort.\u003C\u002Fp>\u003Ch2>What the paper actually shows\u003C\u002Fh2>\u003Cp>The concrete result available in the abstract is the catalog itself: 195 benchmarks spanning release years 2018 through 2026. The abstract does not provide benchmark scores, model rankings, or performance numbers, so there is no quantitative leaderboard to interpret here.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778739646209-jgt2.png\" alt=\"AISafetyBenchExplorer maps AI safety benchmarks\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Instead, the paper claims a descriptive finding: the catalog reveals fragmented measurement and weak benchmark governance. That is the main takeaway. 
The paper is using structured metadata to surface patterns in the benchmark ecosystem, not to prove that one model is safer than another.\u003C\u002Fp>\u003Cp>Because the abstract is brief, it does not give detailed breakdowns such as how many benchmarks fall into each safety category, what metric families dominate, or which governance failures are most common. Those may be in the full paper, but they are not visible in the source material provided here. So the safest reading is that the paper’s evidence is catalog-based and descriptive, not a new experimental benchmark result.\u003C\u002Fp>\u003Cul>\u003Cli>195 AI safety benchmarks are cataloged.\u003C\u002Fli>\u003Cli>The release window covered is 2018 to 2026.\u003C\u002Fli>\u003Cli>The schema records benchmark-level metadata and metric-level definitions.\u003C\u002Fli>\u003Cli>The paper highlights fragmented measurement and weak benchmark governance.\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Why developers should care\u003C\u002Fh2>\u003Cp>If you work on AI systems, safety evaluation is only as good as the benchmark stack behind it. A catalog like this can help teams decide whether a benchmark is suitable for a product decision, a research comparison, or a compliance-style review. It can also help teams avoid over-trusting a single score that may be built on unclear or inconsistent measurement.\u003C\u002Fp>\u003Cp>This kind of work is especially useful when multiple teams inside a company use different safety checks. A structured catalog can become a shared reference for naming, scoping, and comparing benchmarks. That makes it easier to answer questions like: Are we using overlapping metrics? Are we evaluating the same safety property in different ways? Do our chosen benchmarks have enough documentation to support a release gate?\u003C\u002Fp>\u003Cp>There is also a broader tooling angle. Metric-aware catalogs can support internal benchmark registries, evaluation dashboards, and audit trails. 
Even if the paper itself is not a software system, the structure it describes is the kind of thing that can be turned into one.\u003C\u002Fp>\u003Ch2>Limits and open questions\u003C\u002Fh2>\u003Cp>The biggest limitation is that the abstract gives us the catalog, not the full analysis. We know the paper claims fragmented measurement and weak governance, but the source text does not show the exact criteria used to judge those issues or how severe they are across the benchmark set.\u003C\u002Fp>\u003Cp>We also do not get benchmark-level performance numbers, inter-rater agreement, or examples of specific benchmarks in the abstract. That means the practical value here is in the map, not in a new measured improvement. If you are looking for a paper that says one safety method outperforms another, this is not that paper.\u003C\u002Fp>\u003Cp>The open question is whether this catalog becomes a living resource or a snapshot. In a fast-moving area like AI safety, the value of a benchmark catalog depends on how well it can be updated, how consistently its schema is maintained, and whether the community actually uses it to reduce duplication and raise the quality bar.\u003C\u002Fp>\u003Cp>For now, AISafetyBenchExplorer looks most useful as infrastructure for the safety-evaluation ecosystem: a way to see what exists, how it is measured, and where the field still lacks clean governance. 
That is not flashy, but it is exactly the kind of work that makes later safety claims easier to trust.\u003C\u002Fp>","A catalog of 195 AI safety benchmarks shows how fragmented measurement and weak governance make safety evaluation hard to compare.","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.12875",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778739653161-5vdb.png",[13,14,15,16,17],"AI safety","benchmarks","evaluation metrics","governance","catalog","en",2,false,"2026-05-14T06:20:29.016052+00:00","2026-05-14T06:20:29.003+00:00","done","5c837792-1148-4ede-a174-50d504292363","aisafetybenchexplorer-ai-safety-benchmarks-en","research","0eb3d74f-c737-41a4-8b9b-fc30b2b3b0ac","published","2026-05-14T09:00:16.986+00:00",[31,32,33],"It catalogs 195 AI safety benchmarks from 2018 to 2026.","The schema tracks benchmark metadata and metric-level definitions.","The paper highlights fragmented measurement and weak benchmark 
governance.","3103988e-c4fe-45e3-98ab-846500c9d507","[-0.041664664,-0.022697013,0.025198787,-0.061173324,-0.0057832105,-0.017032448,-0.00062371494,-0.0024954327,0.015250342,0.037678346,-0.010585158,0.0127493525,0.04112632,0.012664627,0.13710864,0.035411038,0.02434687,0.023337886,-0.0041274433,-0.019212345,0.020727774,0.006832482,-0.03275404,-0.04431432,-0.01855085,-0.007600939,-0.0159968,0.016018767,0.0027519623,0.0045034266,-0.0050498587,0.010093209,0.007872768,0.014523806,0.021408042,0.0017364058,-0.00018119134,-0.0015759825,0.0121188415,0.01665477,0.0059349756,0.007481335,0.0075824796,-0.021083653,-0.029529873,0.019837108,0.018601438,0.0034337263,-0.025792368,0.0014863615,-0.0025154725,0.02086156,-0.01514797,-0.17214647,-0.0108436495,-0.0028048705,0.0065850574,0.002864521,0.017141767,0.010575036,-0.025158858,0.0055784453,-0.023761261,-0.01358458,0.014635527,-0.012782081,0.013240489,0.00024029466,-0.015852064,-0.0043076198,-0.033136774,-0.00818494,-0.008594523,-0.03368047,-0.013397028,-0.0018380799,0.008673521,-0.035597064,-0.02395552,0.016585227,-0.01561828,-0.0019718208,0.013033666,-0.026485177,-0.012304679,-0.013155745,0.011849673,-0.011118834,0.008745809,0.016724989,0.0043777036,0.014202474,-0.025167277,-0.018270118,0.014186099,-0.008838093,-0.0053502475,0.0148858875,-0.0041353237,0.01487057,-0.015207012,-0.016429504,-0.005827751,-0.0057536936,0.020497896,-0.011495955,-0.0025204164,-0.02285271,-0.0034369624,0.014895721,0.020253938,-0.026521435,-0.004946213,-0.014966583,-0.024118545,-0.13359715,-0.019374441,0.0061164494,-0.0064794654,-0.007943869,-0.021982988,0.011681674,0.010821664,0.05266558,-0.0031143655,-0.03933015,0.016667394,-0.0035041485,-0.013751173,0.016397964,-0.031071017,-0.0031059324,0.023516672,-0.02068573,0.0075963633,0.017160684,0.0109187905,-0.014792065,0.0021114936,-0.03793087,-0.0023834947,0.037812263,0.0019572694,-0.00016086899,-0.028781956,-0.005935061,-0.042185307,0.010699751,0.0009045196,0.0005620971,0.03434638,-0.016972113,-0.020506952,-0
.011996297,0.029693445,-0.015063527,0.013654016,0.012933691,0.011599517,0.0033971262,0.0063133696,0.002475222,-0.01722541,0.00067785167,0.01668069,0.039575525,0.017816491,-0.0019148845,0.011763551,0.027947735,0.0026939071,-0.016567528,0.005521019,-0.007048117,-0.0089783585,0.00709864,0.027601367,0.03486546,-0.0045062792,0.009574341,0.037000116,-0.0011146942,-0.0041609537,0.014071048,-0.0048012845,0.011471081,0.0013917797,0.01748399,0.042071465,0.011398622,-0.02083142,-0.009442819,0.013999929,-0.026617423,-0.003555653,-0.016061751,0.01706807,-0.010098972,0.01182499,0.033436004,-0.007474619,0.004701471,0.034936234,-0.005944083,-0.009189964,-0.010205304,-0.005075528,-0.015749604,-0.004336113,-0.018168967,-0.030885844,-0.0008791898,0.0030432665,-0.0035797968,0.030535864,-0.03507874,-0.013041979,0.016432285,0.030765576,-0.009706963,0.011479767,0.0010035925,0.006142742,-0.0012015646,-0.02720938,-0.010615915,0.0055805035,-0.0074628904,-0.028619159,0.023949597,-0.005292782,0.030188795,-0.020357288,-0.027428858,0.024034165,0.008115935,-0.0051792352,0.008758457,0.00927313,0.011815212,-0.026585296,-0.013010534,-0.0074797,0.01693111,0.036216073,-0.02076811,0.025006222,0.0042322166,-0.0072228727,0.017518817,0.00092141255,0.0072174366,0.0026222083,-0.004715149,0.007981167,-0.0027151422,0.010202415,0.04119968,-0.026390033,-0.012237184,0.0087160235,0.014594282,-0.0064972714,0.0031455911,-0.0012880503,0.0042656977,0.0017954163,-0.015012662,0.0006690249,0.010796956,-0.020570653,0.013493778,0.0064779622,0.008214671,0.013781345,-0.011298475,-0.061502147,-0.0064426395,-0.0034893123,-0.0013947891,0.03200953,0.010831296,-0.013583335,0.020314604,0.016873615,0.0064836578,-0.03894875,0.010039749,0.01466338,-0.008046169,-0.012416779,-0.01230936,-0.007467015,0.014262121,-0.012085198,-0.019184968,0.021068532,0.006468364,-0.01232337,-0.0025179742,0.010976692,-0.015963996,0.024016447,0.060533263,-0.015760742,-0.014259865,-0.01933453,0.012265859,0.023109026,-0.010443934,-0.008061631,-0.011640681,-
0.009749422,-0.038679842,-0.0099558,-0.032558467,0.007090972,-0.01714866,-0.007625503,-0.031459786,0.009934174,-0.010725685,-0.010796938,0.012233731,-0.015558514,-0.00966062,0.010361035,-0.01106479,0.015771791,-0.026101956,0.0019030992,0.020014277,0.008994155,-0.025110692,0.01804207,0.0017562273,0.0052844197,-0.018830067,0.010647206,-0.019429773,0.019768555,0.00920139,-0.0038989899,0.015251408,0.0033391388,0.023262044,0.02350614,-0.008399102,-0.0074921306,-0.006610674,0.048291773,0.012732805,-0.004718062,-0.018829245,-0.010252615,0.0062324754,-0.00055452785,-0.0035824464,0.010350835,0.004645932,-0.0062208096,-0.0008824922,-0.016179511,0.026181094,0.01700845,-0.039353732,-0.0029239315,-0.0064755683,-0.005162857,-0.00047541788,0.010612246,0.011908024,0.016691195,0.011005121,-0.020467067,0.009265909,-0.01982105,-0.012580997,-0.0065270877,0.0081243925,0.0053955796,0.0068805697,-0.01606674,-0.013977173,-0.0028651026,0.005926729,0.0006471385,0.031638045,-0.009842448,0.012391955,0.01702112,0.009682097,0.00206324,0.003375328,0.0002264753,0.012245175,-0.0042376076,0.013371934,-0.017261676,0.006483143,-0.0018184431,-0.026272126,-0.0091994805,-0.013988541,-0.0045518023,0.012610471,-0.020678494,0.02247847,0.004074395,0.005100085,-0.021852424,-0.0050383867,0.03426351,-0.013132047,-0.002951252,0.025176538,0.010362698,-0.006761025,-0.020549135,0.004539731,0.011136316,0.040472373,0.026847608,-0.004883618,-0.0142951375,0.0042201094,0.002812425,0.0086045,0.017719183,-0.01920181,0.0067458204,-0.0040128296,-0.012691512,-0.013827337,-0.06341557,-0.026532337,-0.021919612,-0.0010453422,-0.031614535,-0.009036645,-0.016426845,-0.0006287721,0.009004447,0.007056354,0.044325516,-0.04202661,-0.011306844,0.00051610946,-0.024499934,-0.0010771224,-0.005136832,0.018035516,0.01838065,-0.0069111176,-0.0050028446,-0.036836945,-0.0056216004,-0.012948826,0.0034240962,-0.010017435,0.0020596997,-0.05103187,-0.0025617117,0.036677655,-0.011873385,0.020219572,0.001816739,-0.036352962,0.008588003,0.0008191407
5,-0.011509274,0.011325883,0.021124238,0.015075746,-0.016527705,0.02434787,0.018280808,0.03518674,0.031009294,-0.011467509,0.00352691,-0.0043840986,-0.027677385,-0.0014840161,0.0045131617,0.030105894,0.0075267176,0.014881583,-0.0014813126,0.008001463,0.022080809,0.025614403,0.026402432,0.0018202089,-0.027776418,-0.007939349,0.010447195,-0.0010433823,0.01679091,-0.014834313,-0.020999083,-0.014605219,0.0377218,0.033099193,-0.016653782,-0.0047068084,0.018568493,0.0076205744,-0.012861944,-0.02275947,-0.011607452,0.0049607707,0.022913825,0.036107887,-0.0028184834,0.01347564,0.014870331,-0.0117950635,-0.01572886,-0.015939252,-0.007526876,-0.015008142,0.0063118553,0.004585357,-0.0010312054,-0.035774823,0.011659082,0.0033612328,0.023769569,0.029575597,-0.00698582,-0.013922536,0.0007256646,-0.009871348,-0.00048794178,-0.0041549467,0.0122468835,0.022416878,0.0039138226,0.002525949,-0.018937226,0.010542954,-0.02112632,-0.015660172,0.03413257,-0.0652005,0.010429314,0.009999725,-0.022642653,-0.018810334,0.0031907966,-0.0056960895,0.014563734,0.02173987,0.007460029,0.012194515,0.024680717,-0.0066142417,0.010281891,0.0031612087,-0.008163057,-0.0059698103,-0.0010261763,0.0014446435,-0.014015767,0.046787847,-0.020610983,0.010683849,-0.003654303,0.012459757,0.0072683617,0.010337965,0.00081969064,-0.0050560613,-0.009937276,0.0033049101,0.0136120245,-0.026834035,-0.003717915,0.009520094,-0.0042288625,0.010275177,-0.03570134,0.012572877,0.026435144,-0.007855616,0.024295965,-0.029814765,-0.0182827,-0.024263041,-0.013511072,-0.018562403,0.01854159,0.006532224,0.032924175,-0.025351265,0.013344156,-0.0013348064,-0.024553247,-0.022653628,-0.025344983,-0.026340475,-0.015435917,0.008084218,-0.0054676454,-0.010042938,-0.004734117,-0.015778488,0.018708615,-0.03691796,0.028057221,-0.0017899177,0.01688055,0.027041446,0.025394456,-0.0038774465,-0.028922938,-0.0060894466,0.021766433,0.00015497567,0.006754064,0.010265392,-0.0008246947,-0.021951726,-0.015480011,-0.03728147,-0.031658955,-0.0889384,-0.0
07617199,-0.02482523,0.0007704829,-0.0023382446,-0.014210027,0.001852563,-0.03145977,0.018767023,-0.0010937861,0.01123314,0.014701821,0.014480461,-0.011411244,-0.011653289,0.0048299287,-0.0045127105,0.007370068,0.015904445,-0.016728384,-0.015026329,-0.008061143,0.0049569644,0.007740293,-0.043458164,0.032305956,-0.022044286,0.00364275,-0.000237426,0.0033928195,-0.01160746,-0.13483517,-0.014144289,-0.02527046,0.0123760225,-0.005703834,-0.00011411888,0.017101202,0.009096948,0.004744307,-0.0049167243,-0.0122793745,0.0087676905,-0.01932905,-0.006176052,-0.013203618,0.10079264,-0.0155121125,-0.013027647,-0.029853871,-0.053113367,0.0019494445,-0.015532794,-0.013446608,-0.012635708,0.014386997,0.0049725887,0.036334135,-0.009271448,-0.007903108,0.030661313,0.0027877698,-0.027084827,-0.01799523,0.006174655,0.0133716175,-0.0015767894,-0.0124285845,-0.03699708,0.0038768523,-0.0109341955,0.031190086,0.009912364,0.026924253,0.015573653,-0.027979696,-0.021881422,0.00053186936,0.00053984596,0.01863276,0.03895187,-0.00025561842,-0.049442686,0.0064471145,-0.016829306,0.02554507,-0.010439767,-0.004056056,-0.0014981715,0.00230603,0.015060057,0.03406736,-0.02189146,-0.008452166,0.004364531,-0.029260162,-0.01961927,0.0068672826,-0.0032513225,0.0018214415,0.0044107637,-0.016093072,0.025481788,-0.007423129,0.00865976,-0.01814081,-0.025482846,0.0100621795,0.0142175555,0.010228638,-0.016597245,0.007526593,0.005596901,0.018453399,0.025629938,0.023325967,-8.136216e-05,0.021400029,-0.009015855,-0.0315893,-0.025608923,0.007033051,0.033861246,-0.0123007335,0.0029202849,0.013613751,-0.0011388571,0.011202512,-0.011219813,0.0026578594,0.008204271,-0.018871536,-0.006337748,0.025119707,-0.01786566,0.0014339327,0.010233015,0.041937,0.018910581,0.013736834,0.0131334495]",[37,38,40,41,42],{"name":17,"slug":17},{"name":15,"slug":39},"evaluation-metrics",{"name":14,"slug":14},{"name":16,"slug":16},{"name":13,"slug":43},"ai-safety",{"id":27,"slug":45,"title":46,"language":47},"aisafetybenchexplorer-ai-safet
y-benchmarks-zh","AISafetyBenchExplorer：AI 安全基準地圖","zh",[49,55,61,67,73,79],{"id":50,"slug":51,"title":52,"cover_image":53,"image_url":53,"created_at":54,"category":26},"94994abd-e24d-4fd1-b941-942d03d19acf","turboquant-seo-shift-small-sites-en","TurboQuant and the SEO Shift for Small Sites","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840455122-jfce.png","2026-05-15T10:20:28.134545+00:00",{"id":56,"slug":57,"title":58,"cover_image":59,"image_url":59,"created_at":60,"category":26},"670a7f69-911f-41e8-a18b-7d3491253a19","turboquant-vllm-comparison-fp8-kv-cache-en","TurboQuant vs FP8: vLLM’s first broad test","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839858405-b5ao.png","2026-05-15T10:10:37.219158+00:00",{"id":62,"slug":63,"title":64,"cover_image":65,"image_url":65,"created_at":66,"category":26},"5aef1c57-961f-49f7-8277-f83f7336799a","llmbda-calculus-agent-safety-rules-en","LLMbda calculus gives agents safety rules","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825459914-obkf.png","2026-05-15T06:10:36.242145+00:00",{"id":68,"slug":69,"title":70,"cover_image":71,"image_url":71,"created_at":72,"category":26},"712a0357-f7cd-48f2-adde-c2691da0815f","low-complexity-beamspace-denoiser-mmwave-mimo-en","A simpler beamspace denoiser for mmWave MIMO","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814646705-e7mx.png","2026-05-15T03:10:31.764301+00:00",{"id":74,"slug":75,"title":76,"cover_image":77,"image_url":77,"created_at":78,"category":26},"f595f949-6ea1-4b0e-a632-f1832ef26e36","ai-benchmark-wins-cyber-scare-defenders-en","Why AI benchmark wins in cyber should scare 
defenders","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807444539-gz7f.png","2026-05-15T01:10:30.04579+00:00",{"id":80,"slug":81,"title":82,"cover_image":83,"image_url":83,"created_at":84,"category":26},"3ad202d1-9e5f-49c5-8383-02fcf1a23cf2","why-linux-security-needs-patch-wave-mindset-en","Why Linux security needs a patch-wave mindset","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741441493-ikl6.png","2026-05-14T06:50:25.906256+00:00",[86,91,96,101,106,111,116,121,126,131],{"id":87,"slug":88,"title":89,"created_at":90},"a2715e72-1fe8-41b3-abb1-d0cf1f710189","ai-predictions-2026-big-changes-en","AI Predictions for 2026: Brace for Big Changes","2026-03-26T01:25:07.788356+00:00",{"id":92,"slug":93,"title":94,"created_at":95},"8404bd7b-4c2f-4109-9ec4-baf29d88af2b","ml-papers-of-the-week-github-research-desk-en","ML Papers of the Week Turns GitHub Into a Research Desk","2026-03-27T01:11:39.480259+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"87897a94-8065-4464-a016-1f23e89e17cc","ai-ml-conferences-to-watch-in-2026-en","AI\u002FML Conferences to Watch in 2026","2026-03-27T01:51:54.184108+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"6f1987cf-25f3-47a4-b3e6-db0997695be8","openclaw-agents-manipulated-self-sabotage-en","OpenClaw Agents Can Be Manipulated Into Failure","2026-03-28T03:03:18.899465+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"a53571ad-735a-4178-9f93-cb09b699d99c","vega-driving-language-instructions-en","Vega: Driving with Natural Language Instructions","2026-03-28T14:54:04.698882+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"a34581d6-f36e-46da-88bb-582fb3e7425c","personalizing-autonomous-driving-styles-en","Drive My Way: Personalizing Autonomous Driving 
Styles","2026-03-28T14:54:26.148181+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"2bc1ad7f-26ce-4f02-9885-803b35fd229d","training-knowledge-bases-writeback-rag-en","Training Knowledge Bases with WriteBack-RAG","2026-03-28T14:54:45.643433+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"71adc507-3c54-4605-bbe2-c966acd6187e","packforcing-long-video-generation-en","PackForcing: Efficient Long-Video Generation Method","2026-03-28T14:55:02.646943+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"675942ef-b9ec-4c5f-a997-381250b6eacb","pixelsmile-facial-expression-editing-en","PixelSmile Framework Enhances Facial Expression Editing","2026-03-28T14:55:20.633463+00:00",{"id":132,"slug":133,"title":134,"created_at":135},"6954fa2b-8b66-4839-884b-e46f89fa1bc3","adaptive-block-scaled-data-types-en","IF4: Smarter 4-Bit Quantization That Adapts to Your Data","2026-03-31T06:00:36.65963+00:00"]