[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-what-rag-is-and-why-it-matters-zh":3,"tags-what-rag-is-and-why-it-matters-zh":37,"related-lang-what-rag-is-and-why-it-matters-zh":49,"related-posts-what-rag-is-and-why-it-matters-zh":53,"series-research-254c9611-aa49-4f96-be03-77c9c2f8007b":90},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":33,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":23},"254c9611-aa49-4f96-be03-77c9c2f8007b","RAG 是什麼，為何重要","\u003Cp data-speakable=\"summary\">RAG 讓 \u003Ca href=\"\u002Fnews\u002Fhow-to-build-vintage-llm-testbed-5-steps-zh\">LLM\u003C\u002Fa> 先查外部可信資料，再生成答案。\u003C\u002Fp>\u003Cp>說白了，它是在模型回答前先查資料。這比只靠記憶亂猜，可靠很多。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fwhat-is\u002Fretrieval-augmented-generation\u002F\" target=\"_blank\" rel=\"noopener\">AWS\u003C\u002Fa> 把 RAG 定義成 Retrieval-Augmented Generation。它讓 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002F\" target=\"_blank\" rel=\"noopener\">GPT\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude\" target=\"_blank\" rel=\"noopener\">Claude\u003C\u002Fa> 這類 \u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FLarge_language_model\" target=\"_blank\" rel=\"noopener\">LLM\u003C\u002Fa>，先從外部知識庫找資料，再組答案。這件事很實際。因為模型訓練資料是固定的，但政策、價格、文件、新聞都會變。\u003C\u002Fp>\u003Cp>你可能會想問，這不就是搜尋嗎？不是。搜尋只找資料。RAG 會把\u003Ca href=\"\u002Fnews\u002Fai-finds-nine-year-linux-kernel-zero-day-zh\">找到\u003C\u002Fa>的資料塞回 
prompt，讓模型根據資料寫答案。講白了，就是先翻文件，再開口。\u003C\u002Fp>\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>RAG 概念\u003C\u002Fth>\u003Cth>AWS 的說法\u003C\u002Fth>\u003Cth>為什麼重要\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\u003Ctr>\u003Ctd>訓練資料\u003C\u002Ftd>\u003Ctd>靜態，帶有時間限制\u003C\u002Ftd>\u003Ctd>可能漏掉最新事實\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>Retrieval\u003C\u002Ftd>\u003Ctd>從外部知識來源抓資料\u003C\u002Ftd>\u003Ctd>補進新鮮且具體的上下文\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>Amazon Kendra Retrieve API\u003C\u002Ftd>\u003Ctd>最多 100 段 passages\u003C\u002Ftd>\u003Ctd>給模型更多可用來源\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>Passage 大小\u003C\u002Ftd>\u003Ctd>每段最多 200 token words\u003C\u002Ftd>\u003Ctd>讓上下文保持精簡\u003C\u002Ftd>\u003C\u002Ftr>\u003C\u002Ftbody>\u003C\u002Ftable>\u003Ch2>RAG 為什麼會冒出來\u003C\u002Fh2>\u003Cp>\u003Ca href=\"\u002Ftag\u002Fllm\">LLM\u003C\u002Fa> 很會寫字，這點沒人否認。但它們也很會一本正經地亂講。AWS 提到幾個常見問題：模型會編答案、講得太空、引用不可靠來源，還會把不同文件裡的名詞混在一起。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777958450449-qp57.png\" alt=\"RAG 是什麼，為何重要\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這在客服、內部知識庫、企業搜尋裡很致命。使用者問的是今年的福利政策，模型卻吐出去年版本。這不是文風問題，這是信任問題。\u003C\u002Fp>\u003Cp>RAG 的做法很直接。先找資料，再生成答案。模型還是負責寫，但事實來源改成組織自己選的資料庫。這樣至少知道它是根據哪份文件在講。\u003C\u002Fp>\u003Cul>\u003Cli>不用為每個內部場景重訓整個 foundation model。\u003C\u002Fli>\u003Cli>可以抓最新文件、API 資料、公告或紀錄。\u003C\u002Fli>\u003Cli>開發者能控制模型能引用什麼。\u003C\u002Fli>\u003Cli>也能先檢查權限，再把資料送進 prompt。\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>RAG 的實際流程\u003C\u002Fh2>\u003Cp>一個 RAG 系統通常從外部資料開始。可能是文件、\u003Ca href=\"\u002Ftag\u002Fapi\">API\u003C\u002Fa>、資料庫，或 \u003Ca href=\"\u002Ftag\u002Fgithub\">GitHub\u003C\u002Fa> repo。這些資料會先切塊，再轉成 embeddings，存進 \u003Ca href=\"\u002Ftag\u002Fvector-database\">vector database\u003C\u002Fa>。\u003C\u002Fp>\u003Cp>使用者提問後，query 也會被轉成向量。系統拿它去比對知識庫，挑出最相關的 passages。接著把這些內容放進 
prompt，交給 LLM 生成答案。\u003C\u002Fp>\u003Cp>聽起來簡單，維運才是重點。資料一更新，embeddings 也要更新。你如果放著不管，retrieval 會撈到舊內容。那種錯法很陰險，因為答案看起來還是很順。\u003C\u002Fp>\u003Cblockquote>“Retrieval-augmented generation is the process of optimizing the output of a large language model, so it references an authoritative knowledge base outside of its training data sources before generating a response.” — \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fwhat-is\u002Fretrieval-augmented-generation\u002F\" target=\"_blank\" rel=\"noopener\">Amazon Web Services\u003C\u002Fa>\u003C\u002Fblockquote>\u003Ch2>RAG 和 semantic search 差在哪\u003C\u002Fh2>\u003Cp>AWS 把兩者分得很清楚。semantic search 是找資料的引擎。RAG 是完整流程。它先找，再寫。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777958454861-td46.png\" alt=\"RAG 是什麼，為何重要\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這個差別很重要。搜尋系統解決的是「哪段文字相關」。RAG 解決的是「拿到這段文字後，要怎麼寫成答案」。在企業環境裡，前者常常比後者更難。\u003C\u002Fp>\u003Cp>因為文件很多，而且散在各處。手冊、FAQ、客服紀錄、內部公告，全都可能是來源。這時候 semantic search 會先幫你縮小範圍，減少人工整理成本。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>Keyword search\u003C\u002Fstrong> 快，但容易漏掉換句話說的內容。\u003C\u002Fli>\u003Cli>\u003Cstrong>Semantic search\u003C\u002Fstrong> 找的是語意，不是字面。\u003C\u002Fli>\u003Cli>\u003Cstrong>RAG\u003C\u002Fstrong> 會把找到的內容變成回答。\u003C\u002Fli>\u003Cli>\u003Cstrong>權限控管\u003C\u002Fstrong> 可以先過濾文件，再進模型。\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>AWS 提供哪些 RAG 工具\u003C\u002Fh2>\u003Cp>AWS 這邊主打三個產品：\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fbedrock\u002F\" target=\"_blank\" rel=\"noopener\">Amazon Bedrock\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fkendra\u002F\" target=\"_blank\" rel=\"noopener\">Amazon Kendra\u003C\u002Fa>，還有 \u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fsagemaker\u002F\" target=\"_blank\" rel=\"noopener\">Amazon SageMaker 
JumpStart\u003C\u002Fa>。三者定位不一樣。\u003C\u002Fp>\u003Cp>Bedrock 偏向 managed foundation models，也提供 knowledge base 來做 RAG。Kendra 偏企業搜尋。SageMaker JumpStart 則比較像給團隊自己拼一套 ML 工作流。\u003C\u002Fp>\u003Cp>最具體的數字是 Kendra 的 Retrieve API。它最多可回傳 100 段 passages。每段最多 200 \u003Ca href=\"\u002Ftag\u002Ftoken\">token\u003C\u002Fa> words。這代表 AWS 想讓模型拿到夠多上下文，但又不想把 prompt 塞爆。\u003C\u002Fp>\u003Cp>如果你在選方案，可以這樣看：\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>Amazon Bedrock\u003C\u002Fstrong> 適合想快點上線的人。\u003C\u002Fli>\u003Cli>\u003Cstrong>Amazon Kendra\u003C\u002Fstrong> 適合文件多、權限複雜的企業。\u003C\u002Fli>\u003Cli>\u003Cstrong>Amazon SageMaker JumpStart\u003C\u002Fstrong> 適合想自己組件的人。\u003C\u002Fli>\u003Cli>\u003Cstrong>Retrieval quality\u003C\u002Fstrong> 往往比模型大小更重要。\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>背景脈絡：為什麼大家都在談 RAG\u003C\u002Fh2>\u003Cp>RAG 會紅，不是因為它很潮。是因為它很務實。很多團隊不想每次文件一改，就重訓模型。那太貴，也太慢。\u003C\u002Fp>\u003Cp>而且 LLM 的問題一直都在。它會 \u003Ca href=\"\u002Fnews\u002Fwhy-latent-agents-proves-internalized-debate-zh\">hallucinate\u003C\u002Fa>，會講得很像真的。對消費者問答也許還能混過去。對企業文件、法務內容、產品規格，就很難混。\u003C\u002Fp>\u003Cp>所以現在很多公司先做 RAG，再談 fine-tuning。這個順序很合理。先把資料接好，先讓答案有來源，再想要不要改模型本體。\u003C\u002Fp>\u003Cp>這裡也能看出產業分工。模型供應商負責 LLM。雲端平台負責 retrieval、storage、權限與部署。開發團隊負責資料品質。三邊缺一個，效果都會掉。\u003C\u002Fp>\u003Ch2>接下來該怎麼看 RAG\u003C\u002Fh2>\u003Cp>我覺得，RAG 不是萬靈丹。資料亂、切塊爛、權限沒控好，答案一樣會出包。只是它比直接叫模型瞎答，至少多了一層把關。\u003C\u002Fp>\u003Cp>如果你在做客服機器人、內部知識庫、或文件型產品，RAG 很值得先試。先問自己一件事：你的使用者是不是需要最新、可追溯、來自你自己資料的答案？\u003C\u002Fp>\u003Cp>如果答案是 yes，那就別再只靠純生成。先把 retrieval 做好，再來談模型。這條路很務實，也比較少踩雷。\u003C\u002Fp>","RAG 讓 LLM 先查外部可信資料再回答，能降低幻覺、更新更快，也更適合企業文件與權限控管。","aws.amazon.com","https:\u002F\u002Faws.amazon.com\u002Fwhat-is\u002Fretrieval-augmented-generation\u002F",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777958450449-qp57.png",[13,14,15,16,17,18,19,20],"RAG","LLM","Retrieval-Augmented Generation","semantic search","AWS","Amazon Bedrock","Amazon Kendra","Amazon SageMaker 
JumpStart","zh",2,false,"2026-05-05T05:20:30.928679+00:00","2026-05-05T05:20:30.757+00:00","done","24c4ecc4-c374-4385-9e7a-73c8e5025564","what-rag-is-and-why-it-matters-zh","research","58c0fcc1-175d-4769-a1d6-0e7ef5eca477","published","2026-05-05T09:00:17.751+00:00",[34,35,36],"RAG 先查外部可信資料，再讓 LLM 回答。","它能降低幻覺，也更適合需要最新資料的場景。","RAG 的成敗常常卡在 retrieval、切塊和權限控管。",[38,41,43,45,47],{"name":39,"slug":40},"retrieval-augmented generation","retrieval-augmented-generation",{"name":13,"slug":42},"rag",{"name":17,"slug":44},"aws",{"name":14,"slug":46},"llm",{"name":16,"slug":48},"semantic-search",{"id":30,"slug":50,"title":51,"language":52},"what-rag-is-and-why-it-matters-en","What RAG Is and Why It Matters for LLMs","en",[54,60,66,72,78,84],{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":29},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":29},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":29},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 
代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":73,"slug":74,"title":75,"cover_image":76,"image_url":76,"created_at":77,"category":29},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":79,"slug":80,"title":81,"cover_image":82,"image_url":82,"created_at":83,"category":29},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":85,"slug":86,"title":87,"cover_image":88,"image_url":88,"created_at":89,"category":29},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[91,96,101,106,111,116,121,126,131,136],{"id":92,"slug":93,"title":94,"created_at":95},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 
研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":132,"slug":133,"title":134,"created_at":135},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":137,"slug":138,"title":139,"created_at":140},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]