[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-claude-code-setup-guide-researchers-en":3,"tags-claude-code-setup-guide-researchers-en":32,"related-lang-claude-code-setup-guide-researchers-en":43,"related-posts-claude-code-setup-guide-researchers-en":47,"series-tools-1e2b9ee3-510a-4d07-855a-766d0be9874c":84},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":30,"title_original":31,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"1e2b9ee3-510a-4d07-855a-766d0be9874c","Claude Code Setup Guide for Research Workflows","\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> runs in your terminal, not inside a browser tab, and that changes the way you work. Paul Goldsmith-Pinkham says he started using it heavily last fall, after a year or two of experimenting with AI coding tools, and his workflow speed jumped.\u003C\u002Fp>\u003Cp>That matters for empirical research because the boring parts of coding are often the slowest parts: cleaning data, fixing scripts, checking outputs, and rewriting the same analysis after a small bug. \u003Ca href=\"\u002Fnews\u002Frtk-cuts-claude-code-token-spend-en\">Claude Code\u003C\u002Fa> is built to sit next to your project files, read them, edit them, and run commands directly on your machine.\u003C\u002Fp>\u003Cp>If you do research with code, this is the kind of tool you should understand even if you do not plan to use it every day. 
The gap between browser chat and terminal agents is bigger than it looks, and the practical details decide whether the tool feels useful or annoying.\u003C\u002Fp>\u003Ch2>Why Claude Code changes the workflow\u003C\u002Fh2>\u003Cp>\u003Ca href=\"\u002Fnews\u002Fclaude-code-march-2026-update-fixes-bugs-en\">Claude Code\u003C\u002Fa> is an AI assistant that lives inside your shell. You type a task in natural language, and it can inspect files, write code, execute scripts, and keep going without you copying text between windows.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058381471-uefp.png\" alt=\"Claude Code Setup Guide for Researchers\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>That is the main difference from using \u003Ca href=\"https:\u002F\u002Fchat.openai.com\u002F\" target=\"_blank\" rel=\"noopener\">ChatGPT\u003C\u002Fa> or \u003Ca href=\"https:\u002F\u002Fclaude.ai\" target=\"_blank\" rel=\"noopener\">Claude\u003C\u002Fa> in a browser. A browser chat can help you think through code, but it cannot see your local project tree unless you paste it in. \u003Ca href=\"\u002Fnews\u002Fopenai-codex-plugin-claude-code-en\">Claude Code\u003C\u002Fa> can inspect your working directory, which means it can follow the structure of your repo, understand your filenames, and act on the files already on disk.\u003C\u002Fp>\u003Cp>Goldsmith-Pinkham frames it like having a capable research assistant sitting at your desk. 
That comparison is useful because the tool is strongest when you give it a bounded task and enough context to work through the steps on its own.\u003C\u002Fp>\u003Cul>\u003Cli>It can read local files without manual copy-paste.\u003C\u002Fli>\u003Cli>It can run scripts and show terminal output.\u003C\u002Fli>\u003Cli>It can edit code in place inside your project.\u003C\u002Fli>\u003Cli>It can keep a plan going across multiple turns.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>The practical upside is faster iteration. You spend less time on mechanical work and more time checking whether the result is actually correct. For researchers, that can mean moving from idea to result in a much shorter loop.\u003C\u002Fp>\u003Cp>Goldsmith-Pinkham also makes a point that gets lost in the hype: the value is often in execution, not idea generation. The model may not hand you a brilliant new hypothesis, but it can help you test more ideas, clean more data, and debug more quickly.\u003C\u002Fp>\u003Ch2>The setup path and pricing\u003C\u002Fh2>\u003Cp>Installing Claude Code is straightforward. If you already have Node.js, you can install it with \u003Ca href=\"https:\u002F\u002Fwww.npmjs.com\u002Fpackage\u002F@anthropic-ai\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">npm\u003C\u002Fa>. Anthropic also offers a standalone installer (see the \u003Ca href=\"https:\u002F\u002Fdocs.anthropic.com\u002Fen\u002Fdocs\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code documentation\u003C\u002Fa>), with support for Mac, Linux, and Windows through WSL.\u003C\u002Fp>\u003Cp>Once it is installed, you move into a project folder and run \u003Ccode>claude\u003C\u002Fcode>. After that, you authenticate with either a subscription or an API key. 
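The install-and-launch sequence fits in a few terminal commands. A sketch, assuming the npm package name shown on the linked npm page; the project path is a placeholder:

```shell
# Install globally via npm (requires Node.js already on the machine)
npm install -g @anthropic-ai/claude-code

# Move into the project you want Claude to see, then launch the agent
cd ~/projects/replication-2026   # placeholder path
claude                           # first launch walks you through authentication
```

Pick the project directory deliberately: anything Claude reads from that folder is sent to Anthropic's API as context.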
Goldsmith-Pinkham says the pricing tiers are Pro at $20 per month, Max at $100 per month, and Max 20x at $200 per month.\u003C\u002Fp>\u003Cp>His advice is simple: start with the $20 or $100 plan unless you already know you will burn through a large amount of usage. He pays for Max because he uses it heavily, but he does not think most people need the top tier.\u003C\u002Fp>\u003Cblockquote>“If you’re already paying $20\u002Fmonth for Claude’s chat functionality, you already have access to Claude Code—try it out right now.”\u003C\u002Fblockquote>\u003Cp>That is a pretty strong nudge, and it is hard to argue with the economics. If you already have a Claude subscription, the marginal cost of trying the terminal tool can be close to zero.\u003C\u002Fp>\u003Cp>There is one security detail worth taking seriously. Your files stay local, but if Claude reads them, that content is sent through Anthropic’s API as context. Goldsmith-Pinkham’s rule is practical: if you would not put it on Dropbox, do not put it in front of Claude.\u003C\u002Fp>\u003Cul>\u003Cli>Pro costs $20 per month.\u003C\u002Fli>\u003Cli>Max costs $100 per month.\u003C\u002Fli>\u003Cli>Max 20x costs $200 per month.\u003C\u002Fli>\u003Cli>Mac, Linux, and Windows via WSL are supported.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>That makes Claude Code easy to try, but it also means you need judgment. IRB data, PII, passwords, and HIPAA-grade material should stay away from it unless you have a controlled environment and a clear policy for handling sensitive data.\u003C\u002Fp>\u003Ch2>Context windows are the real limit\u003C\u002Fh2>\u003Cp>The most useful technical concept in the article is the context window. Every turn in a Claude Code session includes your prompts, the model’s replies, file reads, tool calls, and outputs. 
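The arithmetic is easier to feel with a toy budget tracker. This is a rough sketch, not Anthropic's actual accounting: it assumes the roughly 200,000-token window Goldsmith-Pinkham cites and a crude four-characters-per-token heuristic instead of a real tokenizer:

```python
# Toy context-window budget -- a rough sketch, not Anthropic's real accounting.
CONTEXT_LIMIT = 200_000   # approximate Claude context window, in tokens
CHARS_PER_TOKEN = 4       # crude heuristic for English prose and code

def rough_tokens(text: str) -> int:
    """Estimate token count from character count (heuristic, not a tokenizer)."""
    return max(1, len(text) // CHARS_PER_TOKEN)

class Session:
    """Tracks cumulative context use across turns of an agent session."""
    def __init__(self) -> None:
        self.used = 0

    def add_turn(self, text: str) -> None:
        # Prompts, model replies, file reads, and tool output all count.
        self.used += rough_tokens(text)

    @property
    def remaining(self) -> int:
        return CONTEXT_LIMIT - self.used

    def should_compact(self, threshold: float = 0.8) -> bool:
        # Compact before the window is full, not after quality degrades.
        return self.used > threshold * CONTEXT_LIMIT

s = Session()
s.add_turn("read data/clean.py")   # a short prompt: a handful of tokens
s.add_turn("x" * 400_000)          # one big log dump: ~100K tokens
s.add_turn("x" * 400_000)          # a second dump already blows past the window
print(s.used, s.should_compact())  # prints: 200004 True
```

Two pasted logs are enough to exhaust the entire window, which is why file reads and tool output, not your prompts, dominate the budget.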
The model reads that whole bundle each time it responds.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058395104-6qvi.png\" alt=\"Claude Code Setup Guide for Researchers\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>For Claude, Goldsmith-Pinkham says the context window is around 200,000 tokens. That sounds huge until you start reading files, pasting logs, and asking for several rounds of edits. Then it fills up faster than you expect.\u003C\u002Fp>\u003Cp>Once a session gets long, quality tends to slip. The model starts missing earlier decisions, forgetting constraints, or drifting away from the original goal. That is why long, messy sessions often feel worse than fresh ones.\u003C\u002Fp>\u003Cul>\u003Cli>Claude’s context window is about 200,000 tokens.\u003C\u002Fli>\u003Cli>Long sessions with many turns tend to degrade.\u003C\u002Fli>\u003Cli>File reads and tool output count toward the limit.\u003C\u002Fli>\u003Cli>Compaction compresses history into a shorter summary.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>Claude Code can auto-compact when the session gets too full, but Goldsmith-Pinkham recommends doing it intentionally. You can trigger it with \u003Ccode>\u002Fcompact\u003C\u002Fcode>, and you can even tell it what to remember.\u003C\u002Fp>\u003Cp>His better habit is to write progress to disk. Ask Claude to save a summary of what it has done, what decisions it made, and what remains. Then start a fresh session and load that file. You get a clean context window and a durable record of the work.\u003C\u002Fp>\u003Cp>That is the kind of workflow detail that sounds small until you use it for real. For research work, state on disk is safer than state in chat. 
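The "progress to disk" habit can be scripted. A minimal sketch of the pattern; the filename and section headings are my own choices, and in practice you would ask Claude itself to write the summary before compacting or quitting:

```python
from datetime import date
from pathlib import Path

# Minimal "state on disk" pattern: persist what was done, what was decided,
# and what remains, so a fresh session can reload it instead of relying on
# chat history.
def save_progress(path: Path, done: list[str],
                  decisions: list[str], todo: list[str]) -> None:
    lines = [f"# Progress notes ({date.today().isoformat()})", "", "## Done"]
    lines += [f"- {item}" for item in done]
    lines += ["", "## Decisions"] + [f"- {item}" for item in decisions]
    lines += ["", "## Remaining"] + [f"- {item}" for item in todo]
    path.write_text("\n".join(lines) + "\n", encoding="utf-8")

save_progress(
    Path("PROGRESS.md"),   # placeholder filename
    done=["cleaned panel data", "fixed merge bug in build.py"],
    decisions=["dropped pre-2010 observations"],
    todo=["rerun event-study figures"],
)
# A fresh session can then begin with: "Read PROGRESS.md and continue."
```

The file doubles as an audit trail: unlike chat history, it survives compaction, session restarts, and your own memory.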
It also makes your analysis easier to review later.\u003C\u002Fp>\u003Ch2>Terminal setup matters more than you think\u003C\u002Fh2>\u003Cp>Goldsmith-Pinkham recommends spending a little time on the terminal itself, because Claude Code is only as pleasant as the shell around it. He suggests \u003Ca href=\"https:\u002F\u002Fghostty.org\" target=\"_blank\" rel=\"noopener\">Ghostty\u003C\u002Fa> for a fast, GPU-accelerated terminal and \u003Ca href=\"https:\u002F\u002Fzellij.dev\" target=\"_blank\" rel=\"noopener\">Zellij\u003C\u002Fa> for terminal multiplexing with split panes.\u003C\u002Fp>\u003Cp>That advice is practical, not flashy. If you are going to spend hours inside a terminal agent, you want a setup that makes it easy to see what the model is doing and what your files look like at the same time.\u003C\u002Fp>\u003Cp>He describes the split-pane workflow as especially helpful: Claude Code on one side, file output or logs on the other. That makes it easier to catch mistakes early and keep the session focused on the current task.\u003C\u002Fp>\u003Cul>\u003Cli>Ghostty is fast and renders large outputs well.\u003C\u002Fli>\u003Cli>Zellij makes split panes easy to use.\u003C\u002Fli>\u003Cli>Side-by-side views help during debugging.\u003C\u002Fli>\u003Cli>Better terminal ergonomics reduce friction in long sessions.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>For researchers, this is where the tool starts to feel less like a novelty and more like part of the workbench. 
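The two-pane arrangement he describes maps onto a small Zellij layout file. A sketch, assuming Zellij's KDL layout format; the file path is hypothetical, and `command="claude"` simply launches the Claude Code binary discussed earlier:

```kdl
// ~/.config/zellij/layouts/claude.kdl  (hypothetical path)
layout {
    pane split_direction="vertical" {
        pane command="claude"   // Claude Code session on one side
        pane                    // plain shell for logs and file output on the other
    }
}
```

Starting it with `zellij --layout claude` then gives you the agent and a working shell side by side from the first keystroke.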
The better your terminal setup, the less mental overhead you spend on the interface itself.\u003C\u002Fp>\u003Cp>Goldsmith-Pinkham also points readers to a broader ladder of AI coding tools, from browser chat to \u003Ca href=\"https:\u002F\u002Fcursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa>, then to terminal agents like Claude Code, \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopenai\u002Fcodex\" target=\"_blank\" rel=\"noopener\">OpenAI Codex\u003C\u002Fa>, \u003Ca href=\"https:\u002F\u002Faistudio.google.com\u002Fapps\u002Fcli\" target=\"_blank\" rel=\"noopener\">Gemini CLI\u003C\u002Fa>, and open-source options like \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopen-code-ai\u002Fopen-code\" target=\"_blank\" rel=\"noopener\">Open Code\u003C\u002Fa>.\u003C\u002Fp>\u003Cp>That comparison is useful because it shows where Claude Code sits. It is not the only option, and it is not magic. It is one strong entry in a class of tools that all aim to let an LLM work closer to your actual codebase.\u003C\u002Fp>\u003Ch2>What researchers should do next\u003C\u002Fh2>\u003Cp>The clean takeaway is that Claude Code is worth trying if you write code for empirical work, especially if you spend too much time on repetitive edits, debugging, or reformatting. Start with a small project, keep sessions short, and write progress to files instead of trusting the chat history.\u003C\u002Fp>\u003Cp>If you are handling sensitive data, keep the project fenced off. If you are not sure whether a dataset is safe to expose to an API-backed tool, treat it as unsafe until you have a clear policy.\u003C\u002Fp>\u003Cp>My prediction is simple: the researchers who get the most value from terminal agents will not be the ones who ask for the biggest prompts. They will be the ones who build habits around short sessions, saved state, and clear project boundaries. 
The question is whether your workflow is ready for that shift now, or whether you will wait until the next paper deadline forces the issue.\u003C\u002Fp>\u003Cp>For a broader view of terminal-based AI tools, see our guide to \u003Ca href=\"\u002Fnews\u002Fterminal-ai-coding-agents-guide\">terminal AI coding agents\u003C\u002Fa> and our breakdown of \u003Ca href=\"\u002Fnews\u002Fai-coding-tools-for-research\">AI coding tools for research\u003C\u002Fa>.\u003C\u002Fp>","A practical setup guide for Claude Code in research workflows, with terminal tips, context-window advice, and pricing details.","paulgp.substack.com","https:\u002F\u002Fpaulgp.substack.com\u002Fp\u002Fgetting-started-with-claude-code",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058381471-uefp.png",[13,14,15,16,17],"Claude Code","AI coding agents","research workflow","terminal tools","context window","en",2,false,"2026-04-01T13:15:40.740947+00:00","2026-04-01T13:15:40.713+00:00","done","426f9984-bb2f-4aed-8691-10da00a734fd","claude-code-setup-guide-researchers-en","tools","a1f4d887-c321-4cc3-b406-ba4c0c019ebf","published","2026-04-09T09:00:53.295+00:00","2026-05-09T09:16:57.861+00:00","Claude Code Setup Guide for Researchers",[33,35,37,39,41],{"name":13,"slug":34},"claude-code",{"name":17,"slug":36},"context-window",{"name":14,"slug":38},"ai-coding-agents",{"name":16,"slug":40},"terminal-tools",{"name":15,"slug":42},"research-workflow",{"id":27,"slug":44,"title":45,"language":46},"claude-code-setup-guide-researchers-zh","Claude Code 研究者安裝指南","zh",[48,54,60,66,72,78],{"id":49,"slug":50,"title":51,"cover_image":52,"image_url":52,"created_at":53,"category":26},"a6c1d84d-0d9c-4a5a-9ca0-960fbfc1412e","why-gemini-api-pricing-is-cheaper-than-it-looks-en","Why Gemini API pricing is cheaper than it 
looks","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778869846824-s2r1.png","2026-05-15T18:30:26.595941+00:00",{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":26},"8b02abfa-eb16-4853-8b15-63d302c7b587","why-vidhub-huiyuan-hutong-bushi-quan-shebei-tongyong-en","Why VidHub 会员互通不是“买一次全设备通用”","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778789439875-uceq.png","2026-05-14T20:10:26.046635+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":26},"abe54a57-7461-4659-b2a0-99918dfd2a33","why-buns-zig-to-rust-experiment-is-right-en","Why Bun’s Zig-to-Rust experiment is the right move","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778767895201-5745.png","2026-05-14T14:10:29.298057+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":26},"f0015918-251b-43d7-95af-032d2139f3f6","why-openai-api-pricing-is-product-strategy-en","Why OpenAI API pricing is a product strategy, not a footnote","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778749841805-uyhg.png","2026-05-14T09:10:27.921211+00:00",{"id":73,"slug":74,"title":75,"cover_image":76,"image_url":76,"created_at":77,"category":26},"7096dab0-6d27-42d9-b951-7545a5dddf33","why-claude-code-prompt-design-beats-ide-copilots-en","Why Claude Code’s prompt design beats IDE 
copilots","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778742651754-3kxk.png","2026-05-14T07:10:30.953808+00:00",{"id":79,"slug":80,"title":81,"cover_image":82,"image_url":82,"created_at":83,"category":26},"1f1bff1e-0ebc-4fa7-a078-64dc4b552548","why-databricks-model-serving-is-right-default-en","Why Databricks Model Serving is the right default for production infe…","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778692290314-gopj.png","2026-05-13T17:10:32.167576+00:00",[85,90,95,100,105,110,115,120,125,130],{"id":86,"slug":87,"title":88,"created_at":89},"8008f1a9-7a00-4bad-88c9-3eedc9c6b4b1","surepath-ai-mcp-policy-controls-en","SurePath AI's New MCP Policy Controls Enhance AI Security","2026-03-26T01:26:52.222015+00:00",{"id":91,"slug":92,"title":93,"created_at":94},"27e39a8f-b65d-4f7b-a875-859e2b210156","mcp-standard-ai-tools-2026-en","MCP Standard in 2026: Integrating AI Tools","2026-03-26T01:27:43.127519+00:00",{"id":96,"slug":97,"title":98,"created_at":99},"165f9a19-c92d-46ba-b3f0-7125f662921d","rag-2026-transforming-enterprise-ai-en","How RAG in 2026 is Transforming Enterprise AI","2026-03-26T01:28:11.485236+00:00",{"id":101,"slug":102,"title":103,"created_at":104},"6a2a8e6e-b956-49d8-be12-cc47bdc132b2","mastering-ai-prompts-2026-guide-en","Mastering AI Prompts: A 2026 Guide for Developers","2026-03-26T01:29:07.835148+00:00",{"id":106,"slug":107,"title":108,"created_at":109},"d6653030-ee6d-4043-898d-d2de0388545b","evolving-world-prompt-engineering-en","The Evolving World of Prompt Engineering","2026-03-26T01:29:42.061205+00:00",{"id":111,"slug":112,"title":113,"created_at":114},"3ab2c67e-4664-4c67-a013-687a2f605814","garry-tan-open-sources-claude-code-toolkit-en","Garry Tan Open-Sources a Claude Code 
Toolkit","2026-03-26T08:26:20.245934+00:00",{"id":116,"slug":117,"title":118,"created_at":119},"66a7cbf8-7e76-41d4-9bbf-eaca9761bf69","github-ai-projects-to-watch-in-2026-en","20 GitHub AI Projects to Watch in 2026","2026-03-26T08:28:09.752027+00:00",{"id":121,"slug":122,"title":123,"created_at":124},"231306b3-1594-45b2-af81-bb80e41182f2","claude-code-vs-cursor-2026-en","Claude Code vs Cursor in 2026","2026-03-26T13:27:14.177468+00:00",{"id":126,"slug":127,"title":128,"created_at":129},"9f332fda-eace-448a-a292-2283951eee71","practical-github-guide-learning-ml-2026-en","A Practical GitHub Guide to Learning ML in 2026","2026-03-27T01:16:50.125678+00:00",{"id":131,"slug":132,"title":133,"created_at":134},"1b1f637d-0f4d-42bd-974b-07b53829144d","aiml-2026-student-ai-ml-lab-repo-review-en","AIML-2026 Is a Bare-Bones Student Lab Repo","2026-03-27T01:21:51.661231+00:00"]