[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-build-codebase-aware-ai-pr-reviewer-en":3,"tags-build-codebase-aware-ai-pr-reviewer-en":35,"related-lang-build-codebase-aware-ai-pr-reviewer-en":46,"related-posts-build-codebase-aware-ai-pr-reviewer-en":50,"series-ai-agent-69c66a80-5dc5-46fb-9218-68f0307e399e":87},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":19,"translated_content":10,"views":20,"is_premium":21,"created_at":22,"updated_at":22,"cover_image":11,"published_at":23,"rewrite_status":24,"rewrite_error":10,"rewritten_from_id":25,"slug":26,"category":27,"related_article_id":28,"status":29,"google_indexed_at":30,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":31,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":21},"69c66a80-5dc5-46fb-9218-68f0307e399e","How to Build a Codebase-Aware AI PR Reviewer","\u003Cp data-speakable=\"summary\">Set up a codebase-aware AI PR reviewer that catches team-specific review mistakes before humans do.\u003C\u002Fp>\u003Cp>This guide is for tech leads and senior developers who are drowning in clean-looking pull requests and need a practical way to move team memory into the review flow. By the end, you’ll have a repeatable setup for a codebase-aware AI reviewer that checks your project rules, reads the right files, and surfaces the mistakes your team keeps seeing.\u003C\u002Fp>\u003Cp>The approach works whether you use \u003Ca href=\"\u002Fnews\u002Fwhy-claude-code-should-use-deepseek-v4-for-1m-context-en\">Claude Code\u003C\u002Fa>, \u003Ca href=\"\u002Ftag\u002Fcursor\">Cursor\u003C\u002Fa>, Cline, \u003Ca href=\"\u002Fnews\u002Fgithub-copilot-code-review-actions-minutes-en\">GitHub Copilot\u003C\u002Fa>, or a mix. 
The key outcome is not a shinier model, but a review system that can actually see your architecture, conventions, and migration rules before a human gets pulled in.\u003C\u002Fp>\u003Ch2>Before you start\u003C\u002Fh2>\u003Cul>\u003Cli>A GitHub account with access to the repository you want to review.\u003C\u002Fli>\u003Cli>One AI coding tool account, such as Claude Code, Cursor, Cline, or GitHub Copilot.\u003C\u002Fli>\u003Cli>Node 20+ if you plan to script review commands locally.\u003C\u002Fli>\u003Cli>Git 2.40+ installed on your machine.\u003C\u002Fli>\u003Cli>A repo with at least one existing convention doc, ADR, or architecture note.\u003C\u002Fli>\u003Cli>Permission to add repo-level documentation files like AGENTS.md or CLAUDE.md.\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Step 1: Map the review misses\u003C\u002Fh2>\u003Cp>Your first goal is to capture the recurring mistakes that humans keep catching late, because those are the rules your reviewer must learn first. Look for patterns such as old middleware paths, duplicate components, layer violations, or literal strings where enums should be used.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777979447282-ff5b.png\" alt=\"How to Build a Codebase-Aware AI PR Reviewer\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Write each miss as a short rule in plain language, then group them by area: auth, UI, backend layering, naming, or migration behavior. 
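\u003C\u002Fp>\u003Cp>As an illustration (the groupings and wording below are examples, not rules from any specific codebase), a grouped miss list might look like this:\u003C\u002Fp>\u003Cpre>\u003Ccode>## Auth\n- New endpoints still import the legacy auth middleware path.\n\n## UI\n- Buttons get rebuilt instead of reusing the shared component.\n\n## Backend layering\n- Controllers call repository functions directly.\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>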
This becomes the source material for your reviewer instructions.\u003C\u002Fp>\u003Cp>Verification: you should have a short list of five to ten review rules that describe real mistakes from your own codebase, not generic style advice.\u003C\u002Fp>\u003Ch2>Step 2: Add repo-level memory files\u003C\u002Fh2>\u003Cp>Your goal here is to move team knowledge into files the \u003Ca href=\"\u002Ftag\u002Fagent\">agent\u003C\u002Fa> can read before it reviews code. Start with \u003Ca href=\"https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fclaude-code\" target=\"_blank\" rel=\"noreferrer\">Claude Code docs\u003C\u002Fa> and the \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fanthropics\u002Fclaude-code\" target=\"_blank\" rel=\"noreferrer\">Claude Code GitHub repo\u003C\u002Fa> if you use Claude, then create a root-level AGENTS.md or CLAUDE.md that explains the rules in concise bullets.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777979452774-02v0.png\" alt=\"How to Build a Codebase-Aware AI PR Reviewer\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cpre>\u003Ccode># AGENTS.md\n\n- New API endpoints must use v2 auth middleware.\n- Do not duplicate shared hooks from \u002Fhooks.\n- Controllers must not import repo functions directly.\n- Check \u002Fdesign-system before creating a new UI component.\n- Use enums instead of string literals for status checks.\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Keep the language specific and testable. 
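\u003C\u002Fp>\u003Cp>For instance, a vague rule can usually be tightened into a checkable one (the vague phrasings here are illustrative; the testable versions come from the list above):\u003C\u002Fp>\u003Cpre>\u003Ccode>Vague:    Keep auth consistent across services.\nTestable: New API endpoints must use v2 auth middleware.\n\nVague:    Avoid duplicating UI work.\nTestable: Check \u002Fdesign-system before creating a new UI component.\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>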
If a rule cannot produce a clear yes-or-no review comment, rewrite it until it can.\u003C\u002Fp>\u003Cp>Verification: you should be able to open the file and point to each rule as something the agent can check during review.\u003C\u002Fp>\u003Ch2>Step 3: Add service-level instruction files\u003C\u002Fh2>\u003Cp>Your goal is to make the reviewer aware of local exceptions and per-service conventions without forcing it to infer them from the whole repo. Create service-specific files beside the code they govern, such as docs in a backend service folder or component notes in a UI package.\u003C\u002Fp>\u003Cp>For example, add a short file in a service directory that says which auth path is canonical, which layer owns orchestration, or which shared component directory must be checked first. This is where migration rules and architecture boundaries belong.\u003C\u002Fp>\u003Cp>Verification: you should be able to open one service folder and find a local instruction file that explains the rules unique to that area.\u003C\u002Fp>\u003Ch2>Step 4: Build the review command\u003C\u002Fh2>\u003Cp>Your goal is to give the AI one repeatable command that performs a read-only, codebase-aware review. The command should load the repo rules, inspect the diff, and ask for findings only where the rules are violated or where the change conflicts with existing patterns.\u003C\u002Fp>\u003Cp>If you use a local script, keep it simple: pass the diff, include the relevant memory files, and ask for a structured output with file, line, issue, and rationale. 
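\u003C\u002Fp>\u003Cp>Such a script can stay very small. As one minimal sketch (the file name matches the command shown below; everything else, including the prompt wording and flag handling, is an assumption rather than a canonical implementation), it could simply assemble a read-only prompt from the diff and the memory files and print it, so you can pipe it into whatever AI CLI you use:\u003C\u002Fp>\u003Cpre>\u003Ccode>\u002F\u002F scripts\u002Freview-pr.js (sketch): assemble a read-only review prompt.\nconst { execFileSync } = require(\"node:child_process\");\nconst fs = require(\"node:fs\");\n\nconst argVal = (flag, fallback) => {\n  const i = process.argv.indexOf(flag);\n  return i === -1 ? fallback : process.argv[i + 1];\n};\n\nconst base = argVal(\"--base\", \"origin\u002Fmain\");\nconst head = argVal(\"--head\", \"HEAD\");\n\n\u002F\u002F Read-only: collect the diff, never touch the working tree.\nconst diff = execFileSync(\"git\", [\"diff\", base + \"...\" + head], { encoding: \"utf8\" });\n\n\u002F\u002F Include repo-level memory files when they exist.\nconst rules = [\"AGENTS.md\", \"CLAUDE.md\"]\n  .filter((f) => fs.existsSync(f))\n  .map((f) => fs.readFileSync(f, \"utf8\"))\n  .join(\"\\n\");\n\nprocess.stdout.write([\n  \"You are a code reviewer. Do not rewrite code.\",\n  \"Report findings only as: file, line, issue, rationale.\",\n  rules,\n  \"--- DIFF ---\",\n  diff,\n].join(\"\\n\\n\"));\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Because a script like this only prints a prompt, it can never edit code itself: the model sees the rules and the diff in one read-only pass.\u003C\u002Fp>\u003Cp>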
Avoid letting the model rewrite code during review.\u003C\u002Fp>\u003Cpre>\u003Ccode>node scripts\u002Freview-pr.js --base origin\u002Fmain --head HEAD\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>Verification: you should get a review output that names concrete files and points to specific rule violations instead of giving generic praise or vague suggestions.\u003C\u002Fp>\u003Ch2>Step 5: Run the reviewer on a real pull request\u003C\u002Fh2>\u003Cp>Your goal is to test the setup against a real PR that touches a known sensitive area, such as auth, shared UI, or backend layering. Use a recent change that a human reviewer already understands well, so you can compare the AI output with the actual team rule.\u003C\u002Fp>\u003Cp>Check whether the reviewer catches the same issues a senior engineer would catch from memory. If it misses something important, add that missing rule to AGENTS.md or the relevant service file and rerun the review.\u003C\u002Fp>\u003Cp>Verification: you should see at least one meaningful, team-specific comment that a generic reviewer would likely miss.\u003C\u002Fp>\u003Ch2>Step 6: Tighten the loop with human feedback\u003C\u002Fh2>\u003Cp>Your goal is to make the reviewer improve every time a human edits its output. After each real PR, record false positives, missed issues, and any rule that was too vague to help.\u003C\u002Fp>\u003Cp>Then update the memory files so the next review is better. 
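\u003C\u002Fp>\u003Cp>One lightweight way to capture corrections (the format here is only a suggestion) is to log each one next to the rule change it produced:\u003C\u002Fp>\u003Cpre>\u003Ccode>## Review log\n- False positive: flagged a test helper as a duplicate hook.\n  Rule update: the duplicate-hook check applies to \u002Fhooks only.\u003C\u002Fcode>\u003C\u002Fpre>\u003Cp>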
The compounding effect is the real win: each human correction becomes a permanent part of the reviewer’s context.\u003C\u002Fp>\u003Cp>Verification: you should notice fewer repeated review comments and more first-pass catches on the same class of mistakes.\u003C\u002Fp>\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>Metric\u003C\u002Fth>\u003Cth>Before\u002FBaseline\u003C\u002Fth>\u003Cth>After\u002FResult\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\u003Ctr>\u003Ctd>Review bottleneck\u003C\u002Ftd>\u003Ctd>Senior reviewer was the only source of team memory\u003C\u002Ftd>\u003Ctd>Memory moved into repo files the AI can read\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>Generic review quality\u003C\u002Ftd>\u003Ctd>Missed codebase-specific rules\u003C\u002Ftd>\u003Ctd>Catches auth, layering, and shared-component violations\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>Review consistency\u003C\u002Ftd>\u003Ctd>Depends on who is available\u003C\u002Ftd>\u003Ctd>Repeatable command with stable instructions\u003C\u002Ftd>\u003C\u002Ftr>\u003C\u002Ftbody>\u003C\u002Ftable>\u003Ch2>Common mistakes\u003C\u002Fh2>\u003Cul>\u003Cli>Writing rules that are too broad. Fix: rewrite them as testable statements, such as “controllers must not import repo functions directly.”\u003C\u002Fli>\u003Cli>Hiding important guidance in chat threads. Fix: move it into AGENTS.md, CLAUDE.md, or a service-level instruction file that lives in the repo.\u003C\u002Fli>\u003Cli>Letting the reviewer edit code during review. 
Fix: keep the review command read-only so it only reports findings and does not drift into implementation.\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>What's next\u003C\u002Fh2>\u003Cp>Once the reviewer is working on one repository, extend the same pattern to other services, then add deeper follow-ups like PR templates, architecture decision records, and automated checks for the rules that never should have been tribal knowledge in the first place.\u003C\u002Fp>","Set up a codebase-aware AI PR reviewer that catches team-specific review mistakes before humans do.","www.freecodecamp.org","https:\u002F\u002Fwww.freecodecamp.org\u002Fnews\u002Fhow-to-unblock-ai-pr-review-bottleneck-handbook",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777979447282-ff5b.png",[13,14,15,16,17,18],"AI code review","GitHub","Claude Code","Cursor","AGENTS.md","codebase context","en",2,false,"2026-05-05T11:10:30.255682+00:00","2026-05-05T11:10:30.236+00:00","done","7e012257-975c-4fc5-b250-1dc165364e19","build-codebase-aware-ai-pr-reviewer-en","ai-agent","15ed5c11-4f9e-495d-9109-4cf1ba19e013","published","2026-05-06T09:00:22.268+00:00",[32,33,34],"Team-specific review rules belong in the repo, not in one reviewer’s head.","A read-only, repeatable review command is easier to trust than a generic AI assistant.","Every missed comment is a new rule you can add to the reviewer’s memory files.",[36,38,40,42,44],{"name":16,"slug":37},"cursor",{"name":17,"slug":39},"agentsmd",{"name":14,"slug":41},"github",{"name":15,"slug":43},"claude-code",{"name":13,"slug":45},"ai-code-review",{"id":28,"slug":47,"title":48,"language":49},"build-codebase-aware-ai-pr-reviewer-zh","怎麼做具備程式碼庫知識的 AI PR 審查器","zh",[51,57,63,69,75,81],{"id":52,"slug":53,"title":54,"cover_image":55,"image_url":55,"created_at":56,"category":27},"fda44d24-7baf-4d91-a7f9-bbfecae20a27","switch-ai-outputs-markdown-to-html-en","How to Switch AI Outputs from Markdown to 
HTML","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778743249827-wmsr.png","2026-05-14T07:20:22.631724+00:00",{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":27},"064275f5-4282-47c3-8e4a-60fe8ac99246","anthropic-cat-wu-proactive-ai-assistants-en","Anthropic’s Cat Wu on proactive AI assistants","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778735465548-a92i.png","2026-05-14T05:10:31.723441+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":27},"423ac8ad-2886-42a9-8dd8-78e5d43a1574","how-to-run-hermes-agent-on-discord-en","How to Run Hermes Agent on Discord","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778724656141-i30t.png","2026-05-14T02:10:35.727086+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":27},"776a562c-99a6-4a6b-93a0-9af40300f3f2","why-ragflow-is-the-right-open-source-rag-engine-to-self-host-en","Why RAGFlow is the right open-source RAG engine to self-host","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778674254587-0pxn.png","2026-05-13T12:10:25.721583+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":27},"322ec8bc-61d3-4c80-bb9e-a19941e137c6","how-to-add-temporal-rag-in-production-en","How to Add Temporal RAG in 
Production","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778667085221-0mox.png","2026-05-13T10:10:31.619892+00:00",{"id":82,"slug":83,"title":84,"cover_image":85,"image_url":85,"created_at":86,"category":27},"1c09aef7-24bc-4d3a-b6cb-426b1012f432","github-agentic-workflows-ai-github-actions-en","GitHub Agentic Workflows puts AI agents in Actions","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778551887736-7b7l.png","2026-05-12T02:11:07.184824+00:00",[88,93,98,103,108,113,118,123,128,133],{"id":89,"slug":90,"title":91,"created_at":92},"03db8de8-8dc2-4ac1-9cf7-898782efbb1f","anthropic-claude-ai-agent-task-automation-en","Anthropic's Claude AI Agent: A New Era of Task Automation","2026-03-25T16:25:06.513026+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"045d1abc-190d-4594-8c95-91e2a26f0c5a","googles-2026-ai-agent-report-decoded-en","Google’s 2026 AI Agent Report, Decoded","2026-03-26T11:15:23.046616+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"e64aba21-254b-4f93-aa21-837484bb52ec","kimi-k25-review-stronger-still-not-legend-en","Kimi K2.5 review: stronger, still not a legend","2026-03-27T07:15:55.385951+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"30dfb781-a1b2-4add-aebe-b3df40247c37","claude-code-controls-mac-desktop-en","Claude Code now controls your Mac desktop","2026-03-28T03:01:59.384091+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"254405b6-7833-4800-8e13-f5196deefbe6","cloudflare-100x-faster-ai-agent-sandbox-en","Cloudflare’s 100x Faster AI Agent Sandbox","2026-03-28T03:09:44.356437+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"04f29b7f-9b91-4306-89a7-97d725e6e1ba","openai-backs-isara-agent-swarm-bet-en","OpenAI backs Isara’s agent-swarm 
bet","2026-03-28T03:15:27.849766+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"3b0bf479-e4ae-4703-9666-721a7e0cdb91","openai-plan-automated-ai-researcher-en","OpenAI’s plan for an automated AI researcher","2026-03-28T03:17:42.312819+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"fe91bce0-b85d-4efa-a207-24ae9939c29f","harness-engineering-ai-agent-reliability-2026","Harness Engineering: From Bridle to Operating System, The Missing Link in AI Agent Reliability","2026-03-31T06:36:55.648751+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"67dc66da-ca46-4aa5-970b-e997a39fe109","openai-codex-plugin-claude-code-en","OpenAI puts Codex inside Claude Code","2026-04-01T09:21:55.381386+00:00",{"id":134,"slug":135,"title":136,"created_at":137},"7a09007d-820f-43b3-8607-8ad1bfcb94c8","mcp-explained-from-prompts-to-production-en","MCP Explained: From Prompts to Production","2026-04-01T09:24:40.089177+00:00"]