How to Use Gemini CLI in VS Code: Setup, IDE Integration, and Coding Workflows

Coding Liquids blog cover featuring Sagnik Bhattacharya for How to Use Gemini CLI in VS Code, showing terminal-based AI coding assistant integrated with VS Code editor.

GitHub Copilot costs $10 per month for individuals. Claude Code requires a Max subscription. These are excellent tools — I use both and recommend them in my courses — but Google has released something that deserves serious attention from developers who want a powerful AI coding assistant without paying anything at all. Gemini CLI is a free, open-source, terminal-based AI tool built by Google that gives you 1,000 requests per day with your personal Google account. No API key. No credit card. No trial period that expires after 14 days. Just sign in with Google and start prompting.

What makes Gemini CLI particularly interesting for VS Code users is the companion extension that Google released alongside the CLI. This extension bridges the gap between the terminal and the IDE, adding a status bar indicator, an output panel, and lightweight inline suggestion support directly inside VS Code. I have spent the past few weeks integrating Gemini CLI into my daily workflow, and this guide covers everything you need to get it running — from initial installation through to practical coding examples and an honest comparison with the paid alternatives.

Prerequisites

Before installing Gemini CLI, make sure you have the following:

  1. Node.js 18 or later. Gemini CLI is distributed as an npm package, so you need a current version of Node.js. Run node --version in your terminal to check. If you are below v18, download the latest LTS from nodejs.org.
  2. A Google account. Any personal Google account works. You do not need a Google Cloud project, a Workspace account, or a billing-enabled API key. The free tier of 1,000 requests per day is tied to your standard Google login.
  3. VS Code installed. Any recent stable version works. I recommend keeping it updated to the latest release for the best extension compatibility.

That is it. Unlike local AI tools that require powerful GPUs and multi-gigabyte model downloads, Gemini CLI runs against Google's cloud infrastructure. Your hardware does not matter beyond being able to run Node.js and VS Code.

Installing Gemini CLI

Open any terminal — the VS Code integrated terminal works perfectly — and run:

npm install -g @anthropic-ai/gemini-cli

Wait — that is wrong, and I left it in deliberately to make a point. There are multiple AI CLI tools circulating now and it is easy to confuse them. The correct command is:

npm install -g @google/gemini-cli

Once the installation finishes, verify it by running:

gemini --version

You should see a version number printed to the terminal. Now authenticate with your Google account:

gemini auth login

This opens a browser window where you sign in with your Google account and grant Gemini CLI the necessary permissions. Once authenticated, the CLI stores your credentials locally, and you will not need to sign in again unless you explicitly log out or your token expires.

To confirm everything is working, run a quick test:

gemini "What version of Node.js am I running?"

Gemini CLI will read your environment context and respond. If you get a coherent answer, your installation is complete.

The VS Code Companion Extension

Google released the Gemini CLI Companion extension on the VS Code marketplace specifically for developers who use Gemini CLI from within the VS Code integrated terminal. This is not a full AI coding assistant like Copilot — it is a lighter-weight integration that enhances the CLI experience inside the editor.

Installing the Extension

  1. Open VS Code and go to the Extensions panel (Ctrl+Shift+X).
  2. Search for "Gemini CLI" or "Gemini CLI Companion".
  3. Install the extension published by Google.
  4. Reload VS Code if prompted.

What the Extension Adds

The companion extension provides three main features:

  • Status bar indicator. A small Gemini icon appears in the VS Code status bar showing your current authentication state and daily request count. This is surprisingly useful — knowing you have used 340 of your 1,000 daily requests helps you pace yourself during intensive coding sessions.
  • Output panel integration. When you run Gemini CLI commands in the integrated terminal, the extension captures the output and mirrors it in a dedicated "Gemini CLI" output panel. This means you can run a long generation command, switch to another terminal tab, and come back to the output panel to review the results without scrolling through terminal history.
  • Inline suggestions. The extension can pass your current file context to Gemini when you invoke it from the command palette (Ctrl+Shift+P, then "Gemini CLI: Suggest"). It reads the file open in the active editor, sends it along with your prompt, and displays the suggestion inline. This is not the same as Copilot's continuous ghost text — you must explicitly invoke it — but it is a step towards tighter IDE integration.

The extension also registers a set of commands in the command palette: "Gemini CLI: Explain Selection", "Gemini CLI: Refactor Selection", and "Gemini CLI: Generate Tests". These are shortcuts that take the currently selected code, construct a prompt, and pass it to the CLI. The results appear in the output panel rather than being inserted directly into the code, which keeps you in control of what actually goes into your files.

Terminal Workflows Inside VS Code

The real power of Gemini CLI is in the terminal. The companion extension is helpful, but the CLI itself is where you do the heavy lifting. Here is how I use it from the VS Code integrated terminal in a typical development session.

Interactive Mode vs Single-Shot Commands

Gemini CLI supports two modes. Single-shot mode is what you saw earlier — gemini "your prompt here" — where you send one prompt, receive one response, and return to your shell. Interactive mode starts a persistent session:

gemini

Running gemini without arguments drops you into an interactive chat session. This is the mode I use most often because it maintains conversation context. You can ask a question, get a response, then follow up without re-explaining the entire context. The session persists until you type /exit or close the terminal.

File Context and the @ Syntax

One of Gemini CLI's most useful features is the ability to reference files directly in your prompts using the @ symbol:

gemini "Explain what @src/utils/auth.ts does and identify any security issues"

The CLI reads the referenced file and includes its contents in the prompt sent to Gemini. You can reference multiple files:

gemini "Compare @src/services/old-parser.js and @src/services/new-parser.js. What changed and are there any regressions?"

This works exceptionally well from the VS Code terminal because you can see your project file tree in the sidebar, identify the files you want to discuss, and reference them by path without having to copy-paste their contents.

Piping Output into Gemini

Because Gemini CLI is a standard terminal tool, it plays nicely with Unix pipes and redirects:

git diff HEAD~3 | gemini "Review this diff. Flag any bugs, security issues, or anti-patterns."
npm test 2>&1 | gemini "These tests are failing. Analyse the errors and suggest fixes."
cat package.json | gemini "Are any of these dependencies outdated or known to have vulnerabilities?"

Piping is where Gemini CLI feels genuinely different from GUI-based AI tools. You can integrate it into your existing terminal habits and shell scripts without changing how you work.
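If you find yourself typing the same review pipe repeatedly, it is worth wrapping in a shell function in your ~/.bashrc or ~/.zshrc. A minimal sketch — the function name and default prompt are my own conventions, not part of Gemini CLI:

```shell
# review_diff: pipe the diff of the last N commits into Gemini CLI
# for review. Intended for your shell profile; adjust the prompt to taste.
review_diff() {
  local commits="${1:-1}"  # default: review only the most recent commit
  git diff "HEAD~${commits}" \
    | gemini "Review this diff. Flag any bugs, security issues, or anti-patterns."
}
```

After sourcing your profile, `review_diff 3` reviews everything in the last three commits with one short command.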

Practical Coding Examples

Here are the workflows I use most frequently, with real examples.

Generating Boilerplate

Starting a new Express API route, a React component, or a database migration involves predictable boilerplate. Rather than copying from old files or typing it from memory, I prompt Gemini:

gemini "Generate a TypeScript Express route handler for POST /api/users that validates the request body with Zod, creates a user in a PostgreSQL database using Prisma, and returns the created user. Include error handling for duplicate emails and validation failures."

Gemini CLI outputs the complete file. I review it, copy what I need, and paste it into my editor. The output quality is comparable to what you get from ChatGPT or Claude — Gemini 2.5 Pro (the model powering the CLI) is a strong code generation model, particularly for TypeScript, Python, and Go.

Writing Tests

This is where the file reference syntax shines:

gemini "Write comprehensive unit tests for @src/services/payment.ts using Vitest. Cover the happy path, edge cases (zero amounts, negative values, currency conversion), and error conditions (network failures, invalid card details). Use describe/it blocks and mock external API calls."

Gemini reads the payment service file, understands its interface, and generates a test suite tailored to the actual code. In my testing, it consistently covers the main paths and most edge cases. I typically add one or two additional edge cases manually after reviewing the generated tests, but the time savings compared to writing everything from scratch are substantial.
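Generating tests for one file scales naturally to a whole directory with a shell loop. A hedged sketch — the directory layout, the prompt, and the helper name `gen_service_tests` are assumptions to adapt to your own project:

```shell
# gen_service_tests: ask Gemini for a Vitest suite for every module in
# src/services/, writing each result into tests/. Review the generated
# files before committing -- AI-written tests still need human eyes.
gen_service_tests() {
  local f base
  mkdir -p tests
  for f in src/services/*.ts; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    base="$(basename "${f%.ts}")"
    gemini "Write Vitest unit tests for @${f}. Cover happy paths, edge cases, and error conditions." \
      > "tests/${base}.test.ts"
  done
}
```

Remember that each file costs one request against your daily quota, so a large services directory can consume a meaningful slice of the 1,000-request allowance.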

Debugging Errors

When a stack trace appears in my terminal, I pipe it directly to Gemini:

npm run build 2>&1 | gemini "This build is failing. Explain the root cause and provide the exact fix."

For runtime errors, I combine the error output with the relevant source file:

gemini "I'm getting this error: 'TypeError: Cannot read properties of undefined (reading map)' at line 47 of @src/components/UserList.tsx. What's causing it and how do I fix it?"

Gemini correctly diagnoses most common errors on the first attempt. It is particularly strong with TypeScript type errors, React rendering issues, and database query problems. Where it occasionally struggles is with errors that arise from the interaction between multiple files or complex state management — the same limitation you would encounter with any AI tool that only sees the files you explicitly provide.
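That error-plus-file pattern is also easy to wrap. A small sketch — `debug_err` is a name I made up for illustration, not a built-in command:

```shell
# debug_err: combine a runtime error message with the file it came from
# and ask Gemini for a diagnosis.
# Usage: debug_err "<error text>" <path/to/file>
debug_err() {
  gemini "I'm getting this error: '$1' in @$2. What's causing it and how do I fix it?"
}
```

For example, `debug_err "TypeError: Cannot read properties of undefined (reading map)" src/components/UserList.tsx` reproduces the prompt above without retyping the boilerplate.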

Explaining Codebases

When joining a new project or reviewing an unfamiliar pull request, I use Gemini to accelerate my understanding:

gemini "I'm new to this project. Read @README.md, @src/index.ts, and @src/config.ts and give me a high-level architecture overview. What patterns is this codebase using? Where would I add a new API endpoint?"

This produces a clear summary that would otherwise take 30-60 minutes of manual exploration. It is not a replacement for actually reading the code, but it gives you a mental map to navigate by.

Configuration and Customisation

Gemini CLI supports project-level configuration through a .gemini/ directory in your project root.

Settings File

Create .gemini/settings.json in your project root to configure default behaviour:

{
  "model": "gemini-2.5-pro",
  "temperature": 0.2,
  "maxOutputTokens": 8192,
  "systemInstruction": "You are a senior TypeScript developer. Write clean, type-safe code. Prefer functional patterns. Always include error handling. Use British English in comments and documentation."
}

The temperature setting is worth paying attention to. For code generation, I recommend keeping it between 0.1 and 0.3 — lower values produce more predictable, conventional code. For brainstorming or creative problem-solving, you might raise it to 0.5 or 0.7.

Context Files

You can create a .gemini/context.md file that gets automatically included with every prompt. This is ideal for project-specific instructions:

# Project Context

This is a Next.js 15 application using the App Router.
Database: PostgreSQL with Prisma ORM.
Testing: Vitest for unit tests, Playwright for e2e.
Styling: Tailwind CSS v4.
State management: Zustand.

## Coding Standards
- Use server components by default; only add "use client" when necessary.
- All API routes must validate input with Zod.
- Database queries go in /src/services/, not in route handlers directly.
- Error responses follow RFC 7807 Problem Details format.

With this context file in place, every prompt you send automatically includes these project conventions. This dramatically improves the relevance of generated code — Gemini will use Prisma instead of raw SQL, Vitest instead of Jest, and follow your architectural patterns without you having to specify them each time.

Model Selection

By default, Gemini CLI uses Gemini 2.5 Pro, which is the most capable model in the lineup. You can switch models per-prompt if needed:

gemini --model gemini-2.5-flash "Quick: what's the syntax for a TypeScript discriminated union?"

The Flash model responds faster, and each prompt still counts as a single request against your daily quota, so there is no penalty for using it on simple questions. I use Pro for code generation and complex reasoning, and Flash for quick syntax lookups and simple explanations.
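If you switch models often, two tiny wrappers in your shell profile save typing. A sketch only — verify the flag and model identifiers against `gemini --help` on your installed version:

```shell
# gpro / gflash: pin the model per invocation. The model identifiers
# match the ones used elsewhere in this post; confirm them on your CLI.
gpro()   { gemini --model gemini-2.5-pro   "$@"; }
gflash() { gemini --model gemini-2.5-flash "$@"; }
```

Then `gflash "syntax for a discriminated union?"` keeps quick lookups on the fast model without touching your default configuration.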

Gemini CLI vs GitHub Copilot vs Claude Code

This is the comparison that matters most. Here is an honest assessment based on several weeks of using all three side by side.

| Feature | Gemini CLI | GitHub Copilot | Claude Code |
|---|---|---|---|
| Price | Free (1,000 req/day) | $10/month individual | Requires Max plan (~$100/month) |
| Interface | Terminal + VS Code companion | Full VS Code integration | Terminal-based |
| Inline autocomplete | No continuous ghost text | Excellent — best in class | No inline autocomplete |
| Code generation quality | Very good (Gemini 2.5 Pro) | Excellent | Excellent (Claude Opus/Sonnet) |
| Project context awareness | File references + context files | Indexes full workspace | Reads full project tree |
| Multi-file editing | Manual (copy-paste output) | Copilot Edits (multi-file) | Autonomous multi-file edits |
| Terminal integration | Native — it is a CLI tool | Copilot in terminal (limited) | Native — it is a CLI tool |
| Piping and scripting | Full Unix pipe support | Not applicable | Full Unix pipe support |
| Offline availability | No (cloud-based) | No (cloud-based) | No (cloud-based) |
| Open source | Yes (Apache 2.0) | No | No |
| Setup effort | Low (npm install + Google sign-in) | Minimal (install extension) | Low (npm install + Anthropic login) |

When to choose Gemini CLI: You want a capable AI coding assistant and you do not want to pay for one. The 1,000 requests per day free tier is generous enough for most individual developers. It is also the right choice if you are already comfortable working in the terminal and prefer a tool that integrates with your existing shell workflows rather than replacing them with a GUI panel.

When to choose Copilot: Inline autocomplete is your top priority. Copilot's ghost text suggestions are still the most seamless coding experience available — you type, suggestions appear, you press Tab. Nothing else matches this for flow state. Copilot is also strongest when you need full workspace indexing and want the AI to understand your entire project without you pointing it to specific files.

When to choose Claude Code: You need an AI tool that can autonomously make multi-file changes, run your tests, and iterate until the code works. Claude Code is the most agentic of the three — it does not just suggest code, it executes commands, reads results, and adjusts. The trade-off is the price and the steeper learning curve.

Limitations and Honest Assessment

Gemini CLI is impressive for a free tool, but it has genuine limitations you should understand before building your workflow around it.

  • No inline autocomplete. This is the single biggest gap. Copilot's ghost text suggestions that appear as you type are the feature most developers associate with "AI coding assistant." Gemini CLI does not offer this. The companion extension's inline suggestions require manual invocation — you must explicitly ask for a suggestion rather than having them appear automatically. For developers who have grown accustomed to Copilot's continuous suggestions, this will feel like a significant downgrade.
  • Rate limits. 1,000 requests per day sounds generous, and for most developers it is. But if you are using Gemini CLI heavily — piping build outputs, generating tests for multiple files, iterating on complex code generation — you can burn through requests faster than you expect. I hit the limit once during an intensive refactoring session where I was generating tests for about 40 service files. The limit resets daily, but it is worth being aware of.
  • Context window constraints. When you reference multiple large files with the @ syntax, you can exceed the model's context window. Gemini 2.5 Pro has a large context window (over 1 million tokens), but very large codebases with many interconnected files can still hit limits. The CLI does not always communicate clearly when context is being truncated.
  • No autonomous file editing. Unlike Claude Code, which can directly create and modify files in your project, Gemini CLI outputs text to the terminal. You must manually copy the generated code into your files. The companion extension helps by capturing output in a panel, but it still requires manual intervention. For large-scale refactoring across many files, this manual step becomes tedious.
  • Internet required. Unlike local models (such as Gemma 3 running through Ollama), Gemini CLI requires an internet connection. If you work in air-gapped environments, on flights without Wi-Fi, or in offices with restricted network access, Gemini CLI will not function.
  • Privacy considerations. Your code is sent to Google's servers for processing. Google states that free-tier prompts may be used to improve their models. If your organisation has strict data handling policies or you are working on proprietary code that cannot leave your network, this is a material concern. Check your company's acceptable use policy before piping production code through any cloud-based AI tool.
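On the rate-limit point: the companion extension's status bar shows your count, but if you work purely in the terminal you can keep a rough tally yourself. The sketch below is my own convention with clear caveats — it only counts requests routed through this wrapper, and Google's quota resets on its own schedule, not your machine's date:

```shell
# gemini_budget: a purely local tally of how many requests you have
# made today, as a rough guard against the 1,000/day cap. Route your
# prompts through this wrapper instead of calling gemini directly.
gemini_budget() {
  local log="${HOME}/.gemini_request_count"
  local today count
  today="$(date +%Y-%m-%d)"
  # Carry the tally forward within a day; reset when the date rolls over.
  if [ -f "$log" ] && [ "$(head -n1 "$log")" = "$today" ]; then
    count="$(tail -n1 "$log")"
  else
    count=0
  fi
  count=$((count + 1))
  printf '%s\n%s\n' "$today" "$count" > "$log"
  echo "Gemini requests today: $count / 1000"
  gemini "$@"
}
```

It is a blunt instrument, but a running count printed before each response is usually enough to stop an intensive session from hitting the cap by surprise.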

Despite these limitations, Gemini CLI is the strongest free AI coding tool available today. The combination of a capable model (Gemini 2.5 Pro), generous free tier, terminal-native workflow, and open-source codebase makes it a serious option for individual developers and a useful complement to paid tools for those who already subscribe to Copilot or Claude.

Frequently Asked Questions

Is Gemini CLI really free, and what are the actual limits?

Yes, Gemini CLI is genuinely free for personal use with a standard Google account. You get 1,000 requests per day, which resets at midnight Pacific Time. Each prompt you send — whether single-shot or within an interactive session — counts as one request. There is no trial period, no credit card required, and no feature gating. The free tier uses Gemini 2.5 Pro, the same model available through the paid API. If you need more than 1,000 requests per day, you can connect Gemini CLI to a Google Cloud API key with billing enabled, which removes the daily cap and charges per token instead.

Can I use Gemini CLI alongside GitHub Copilot in VS Code?

Yes, and this is actually the setup I recommend for developers who already have a Copilot subscription. Keep Copilot active for its inline autocomplete — that continuous ghost text experience is still unmatched. Use Gemini CLI from the integrated terminal for tasks where you want to be more deliberate: generating test suites, explaining unfamiliar code, reviewing diffs, and debugging errors. The two tools complement each other well because they operate in different parts of the interface. Copilot works in the editor; Gemini CLI works in the terminal. There are no conflicts.

How does Gemini CLI compare to running Gemma 3 locally for coding?

They serve different needs. Gemini CLI sends your code to Google's cloud and runs it against Gemini 2.5 Pro, which is significantly more capable than any locally-runnable model. The code quality, reasoning depth, and context window are all substantially better. Gemma 3 running locally through Ollama keeps your code entirely on your machine — nothing is transmitted — and works offline. If privacy and offline access are non-negotiable requirements, Gemma 3 is the right choice. If you want the best code quality from a free tool and are comfortable with cloud processing, Gemini CLI is the stronger option. See my guide to using Gemma 3 in VS Code for the local approach.
