[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-language-models":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"3a123b8f-ad63-4875-82f2-2616713263ce","language models","language-models",3,"語言模型是生成式 AI 的核心，涵蓋預訓練、詞彙擴充、對齊與安全評估等議題。這裡會整理模型如何學習語意、處理新 token，以及在 jailbreak 與漏洞測試中暴露的風險。","Language models sit at the core of generative AI, spanning pretraining, token initialization, alignment, and security evaluation. This tag collects work on how LMs learn semantics, absorb new vocabulary, and where jailbreak tests expose failure modes.",[12,21,28],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"22c43f4e-8be9-4440-bd1b-74a00b60dfa3","llms-implicit-grammar-representations-en","Do LLMs Learn Grammar Beyond Likelihood?","A probe study finds hidden layers in language models encode grammaticality better than string probability, but not plausibility.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778135464967-fzem.png","en","2026-05-07T06:30:35.804749+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":17,"image_url":26,"cover_image":26,"language":19,"created_at":27},"b712257f-129d-400a-bc73-5e1c3ab200a4","avise-ai-security-evaluation-framework-en","AVISE tests AI security with modular jailbreak evals","AVISE is an open-source framework for finding AI vulnerabilities, with a 25-case jailbreak test that flagged all nine models as vulnerable.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776924767358-ocir.png","2026-04-23T06:12:31.125572+00:00",{"id":29,"slug":30,"title":31,"summary":32,"category":33,"image_url":34,"cover_image":34,"language":19,"created_at":35},"e487e7c6-aa22-484d-9555-46261cc7a91d","grounded-token-initialization-new-vocabulary-en","A Better Way to Seed New LM Tokens","GTI grounds new vocabulary tokens before fine-tuning, aiming to preserve distinctions that mean initialization tends to collapse.","blockchain","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775196588405-1a7u.png","2026-04-03T06:09:29.832749+00:00"]