How to Use OpenCode in VS Code: The Free Open-Source AI Coding Alternative

Coding Liquids blog cover featuring Sagnik Bhattacharya for How to Use OpenCode in VS Code, showing terminal-based AI coding assistant integration.

Claude Code costs $20 per month at minimum (on an Anthropic Pro plan; Max plans cost more), Cursor is $20 per month, and GitHub Copilot sits at $19. These are excellent tools — I use several of them daily and recommend them in my courses. But a growing number of developers want the power of an AI coding assistant without the recurring subscription, the vendor lock-in, or the requirement to send every line of code to a third-party server. If that describes you, OpenCode is the most exciting open-source alternative to land in 2026.

OpenCode is a terminal-based AI coding assistant with over 126,000 GitHub stars, released under the MIT licence. It supports more than 75 models out of the box — including local models through Ollama, cloud providers like OpenAI, Anthropic, and Google, and practically any OpenAI-compatible API endpoint. It runs entirely in your terminal, which means it integrates naturally with VS Code's built-in terminal panel. I have spent the past two weeks setting it up, configuring multiple providers, and testing it across real coding workflows. This guide covers everything you need to get started.

Prerequisites

Before installing OpenCode, make sure the following are in place:

  1. VS Code installed. Any recent stable version works. OpenCode runs in the integrated terminal, so no special extension is required.
  2. Go 1.23+ (optional). If you want to install OpenCode from source using go install, you need Go installed. Alternatively, you can download a pre-built binary or use Homebrew, which does not require Go.
  3. At least one model provider. You need access to at least one AI model. This can be a local Ollama installation (completely free), an OpenAI API key, an Anthropic API key, a Google AI API key, or any OpenAI-compatible endpoint.
  4. Ollama (optional, for local models). If you want to run models entirely on your machine without an internet connection, install Ollama from ollama.com and pull a model such as ollama pull gemma4 or ollama pull deepseek-coder-v2.

If you plan to use only cloud providers (OpenAI, Anthropic, Google), you do not need Go or Ollama — just the pre-built binary and your API key.
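The prerequisites above are easy to verify from the terminal before you start. A minimal sketch — remember that each of these tools is only required for certain install paths (Go for go install, Homebrew for the brew method, Ollama for local models):

```shell
# Check which optional prerequisites are present; missing tools are fine
# as long as you pick an install path that does not need them
for tool in go brew ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not installed"
  fi
done
```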

Installing OpenCode

OpenCode offers several installation methods. Choose the one that suits your setup:

Method 1: Go Install (Requires Go 1.23+)

If you already have Go installed, this is the simplest approach:

go install github.com/opencode-ai/opencode@latest

This downloads, compiles, and places the opencode binary in your $GOPATH/bin directory. Make sure that directory is in your system PATH.
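If you are unsure where that directory is, a short sketch (assuming the default Go layout, where GOPATH falls back to ~/go when unset) resolves it and appends it to PATH for the current session:

```shell
# go install places binaries in $GOPATH/bin; GOPATH defaults to ~/go when unset
gobin="${GOPATH:-$HOME/go}/bin"
export PATH="$PATH:$gobin"
echo "$gobin"
```

Add the export line to your shell profile (~/.bashrc or ~/.zshrc) to make it permanent.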

Method 2: Homebrew (macOS and Linux)

brew install opencode-ai/tap/opencode

This is the easiest method on macOS. Homebrew handles the binary download and PATH configuration automatically.

Method 3: Pre-Built Binary

Visit the OpenCode releases page on GitHub and download the binary for your operating system (Windows, macOS, or Linux). Extract it and place it in a directory that is in your system PATH. On Windows, you can add the directory to your PATH through System Properties or by running:

# Example: add the install directory to PATH on Windows (PowerShell, current session only)
$env:PATH += ";C:\path\to\opencode"

# Example: add the install directory (not the binary itself) to PATH on macOS/Linux
export PATH="$PATH:/path/to/opencode"

Verifying the Installation

Open VS Code's integrated terminal (Ctrl+` or Cmd+`) and run:

opencode --version

You should see the version number printed. If you get a "command not found" error, double-check that the binary location is in your PATH.
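A quick way to diagnose a "command not found" error is to test whether the install directory is actually on PATH before appending it again (the directory below is a placeholder — use wherever you put the binary):

```shell
dir="/opt/opencode"   # placeholder install directory — adjust to your setup
case ":$PATH:" in
  *":$dir:"*) echo "already on PATH" ;;
  *)          export PATH="$PATH:$dir"; echo "added $dir for this session" ;;
esac
```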

Connecting to Model Providers

OpenCode's greatest strength is its model flexibility. It supports over 75 models across multiple providers, and you can switch between them freely. Configuration is handled through a .opencode.json file in your project root or through environment variables.
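Since OpenCode looks for .opencode.json in the project root, a quick check from your project directory tells you which configuration a session will pick up:

```shell
# Report whether a project-level OpenCode config exists; without one,
# provider settings fall back to environment variables
if [ -f .opencode.json ]; then
  echo "using project config: $(pwd)/.opencode.json"
else
  echo "no project config found; relying on environment variables"
fi
```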

OpenAI

Set your API key as an environment variable:

export OPENAI_API_KEY="sk-your-key-here"

Then in your .opencode.json configuration:

{
  "provider": {
    "openai": {
      "model": "gpt-4o",
      "apiKey": "$OPENAI_API_KEY"
    }
  }
}

OpenCode supports all current OpenAI models including GPT-4o, GPT-4o-mini, and o3. For coding tasks, GPT-4o provides the best balance of quality and speed.

Anthropic

export ANTHROPIC_API_KEY="sk-ant-your-key-here"
{
  "provider": {
    "anthropic": {
      "model": "claude-sonnet-4-20250514",
      "apiKey": "$ANTHROPIC_API_KEY"
    }
  }
}

Claude Sonnet 4 is particularly strong for code generation and refactoring tasks. If you have access to Claude Opus 4, it excels at complex architectural reasoning but is slower and more expensive per token.

Google (Gemini)

export GOOGLE_API_KEY="your-google-api-key"
{
  "provider": {
    "google": {
      "model": "gemini-2.5-pro",
      "apiKey": "$GOOGLE_API_KEY"
    }
  }
}

Ollama (Local Models)

For completely free, offline usage, configure Ollama as your provider:

{
  "provider": {
    "ollama": {
      "model": "gemma4:27b",
      "baseURL": "http://localhost:11434"
    }
  }
}

No API key is needed for Ollama. Make sure the Ollama service is running (ollama serve) and you have pulled the model you want to use. OpenCode communicates with Ollama through its local API endpoint.

Multiple Providers

One of OpenCode's most powerful features is the ability to configure multiple providers simultaneously. You can switch between them during a session without restarting:

{
  "provider": {
    "openai": {
      "model": "gpt-4o",
      "apiKey": "$OPENAI_API_KEY"
    },
    "anthropic": {
      "model": "claude-sonnet-4-20250514",
      "apiKey": "$ANTHROPIC_API_KEY"
    },
    "ollama": {
      "model": "gemma4:27b",
      "baseURL": "http://localhost:11434"
    }
  }
}

This means you can use a free local model for routine tasks and switch to Claude or GPT-4o when you need higher quality output for complex logic — all within the same terminal session.

Using OpenCode in the VS Code Terminal

OpenCode is a terminal application, and VS Code's integrated terminal is the perfect environment for it. Open the terminal panel (Ctrl+` or Cmd+`), navigate to your project directory, and launch OpenCode:

opencode

This starts OpenCode in interactive mode. You are presented with a terminal-based interface where you can type prompts, reference files, and receive AI-generated responses — all without leaving VS Code.

Interactive Mode Basics

Once OpenCode is running, you interact with it conversationally:

  • Ask questions: Type a natural language question and press Enter. OpenCode analyses your project context and responds directly in the terminal.
  • Reference files: Use the @filename syntax to include specific files in your prompt context. For example: Refactor @src/utils/auth.ts to use async/await instead of promises.
  • Generate code: Ask OpenCode to create new files or write functions. It outputs the code directly and can write it to disk with your confirmation.
  • Edit existing code: Describe a change you want, and OpenCode shows a diff view of the proposed modifications. You accept or reject each change.

Keyboard Shortcuts

OpenCode's terminal UI supports several shortcuts that speed up your workflow:

  • Ctrl+C — cancel the current generation
  • Ctrl+L — clear the conversation history and start fresh
  • Tab — autocomplete file paths when using the @ syntax
  • /compact — summarise the current conversation to reduce token usage
  • /model — switch to a different model mid-session

Split Terminal Workflow

A workflow I find highly productive is to split the VS Code terminal into two panes: one running OpenCode and the other for your normal terminal commands (git, npm, running tests). This way, you can ask OpenCode to write a test, then immediately run it in the adjacent pane to verify it passes. Press Ctrl+Shift+5 (or Cmd+\ on Mac) to split the terminal panel.

Key Features

OpenCode packs a surprising number of features for a terminal-based tool. Here are the ones that matter most for day-to-day development:

Multi-Model Support

As covered in the configuration section, OpenCode supports over 75 models across all major providers. This is its single biggest differentiator. Cursor locks you into their hosted models. Copilot uses GitHub's models. Claude Code requires an Anthropic subscription. OpenCode lets you bring any model you want — including free local models through Ollama. You can even point it at a custom OpenAI-compatible API endpoint if your organisation runs its own inference server.

Conversation Sessions

OpenCode maintains conversation history within a session, so you can iteratively refine code. Ask it to write a function, then follow up with "add error handling", then "write tests for this". Each prompt builds on the previous context. Sessions are persisted locally, so you can resume a conversation after restarting OpenCode.

File Context and Project Awareness

When you launch OpenCode from a project directory, it automatically reads the project structure and can reference files in your codebase. The @filename syntax lets you explicitly include files, but OpenCode also infers relevant context from your prompts. If you ask "fix the authentication bug", it looks for files related to authentication in your project tree.

LSP Integration

OpenCode integrates with Language Server Protocol (LSP) to understand your codebase at a deeper level than raw text. It uses LSP data to resolve type definitions, find references, and understand the relationships between files. This means when you ask it to refactor a function, it can identify all the call sites and update them as well — something a simple text-based tool cannot do reliably.

Diff View

When OpenCode proposes changes to existing files, it presents them in a unified diff format directly in the terminal. You see exactly what lines are being added, removed, or modified before accepting. This is a critical safety feature — you never have to worry about OpenCode silently overwriting code without your review.

Tool Use and Shell Commands

OpenCode can execute shell commands on your behalf (with your approval). This means it can run your test suite, check build output, install dependencies, or perform any terminal operation as part of a coding workflow. For example, you can say "write a unit test for the login function and then run it" — OpenCode writes the test, executes npm test, and adjusts the test if it fails.

Using OpenCode with Local Models (Ollama)

Running OpenCode with Ollama gives you a completely free, offline AI coding assistant. No API keys, no usage limits, no data leaving your machine. Here is how to set it up for the best experience:

Recommended Local Models for Coding

| Model | Size | Min VRAM | Coding Quality | Best For |
| --- | --- | --- | --- | --- |
| Gemma 4 27B | 16GB | 16GB | Very good | General coding, explanation, refactoring |
| DeepSeek Coder V2 | 16GB | 16GB | Excellent for code | Code generation, completion, debugging |
| Llama 3.3 70B | 40GB | 48GB | Excellent | Complex reasoning, architecture decisions |
| Gemma 4 12B | 8GB | 8GB | Good | Quick tasks on modest hardware |
| Qwen 2.5 Coder 7B | 4.5GB | 6GB | Good for code | Low-end GPUs, fast iteration |

Setting Up Ollama with OpenCode

  1. Install Ollama from ollama.com.
  2. Pull a coding model: ollama pull gemma4:27b (or your preferred model from the table above).
  3. Verify Ollama is running: curl http://localhost:11434 should return "Ollama is running".
  4. Create or edit .opencode.json in your project root with the Ollama provider configuration shown earlier.
  5. Launch OpenCode from VS Code's terminal: opencode.

The first prompt may take a few seconds while Ollama loads the model into GPU memory. Subsequent prompts within the same session are faster because the model stays loaded.
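Step 3 above can be wrapped into a small guard you run before launching OpenCode (assumes Ollama's default port, 11434):

```shell
# Verify the local Ollama endpoint responds before starting an OpenCode session
if curl -fsS --max-time 2 http://localhost:11434 >/dev/null 2>&1; then
  echo "Ollama is reachable on port 11434"
else
  echo "Ollama is not reachable — start it with: ollama serve"
fi
```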

Optimising Local Model Performance

  • GPU offloading: Ensure Ollama is using your GPU. Run ollama ps to check. On NVIDIA systems, confirm CUDA is available with nvidia-smi.
  • Context window: Larger context windows consume more VRAM. If you are running out of memory, reduce the context size in Ollama's modelfile or use a smaller model.
  • Quantisation: If the full model does not fit in your VRAM, use a quantised version (e.g., gemma4:27b-q4_K_M). Quality drops slightly but VRAM usage decreases significantly.
  • Keep-alive: By default, Ollama unloads models after 5 minutes of inactivity. Set OLLAMA_KEEP_ALIVE=-1 to keep models loaded permanently during your coding session.
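The keep-alive setting from the last bullet is just an environment variable, set before starting the Ollama server process:

```shell
# Keep models loaded in memory indefinitely instead of unloading after the
# 5-minute default; Ollama reads this variable when the server starts
export OLLAMA_KEEP_ALIVE=-1
echo "OLLAMA_KEEP_ALIVE=$OLLAMA_KEEP_ALIVE"
```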

Practical Workflows

Here are the workflows where I have found OpenCode most useful in day-to-day development:

Code Generation

Describe what you need in plain English and OpenCode generates it. For example: "Create a REST API endpoint in Express.js for user registration. Include input validation with Zod, password hashing with bcrypt, and return appropriate HTTP status codes." OpenCode writes the complete implementation, including imports, error handling, and response formatting. With Claude Sonnet 4 or GPT-4o as the backend model, the quality is comparable to what you would get from Cursor or Copilot Chat.

Refactoring

Reference an existing file and describe the refactoring: Refactor @src/api/handlers.ts — extract the validation logic into a separate middleware function and add TypeScript types for all parameters. OpenCode reads the file, generates the refactored version, and shows you a diff. It handles multi-file refactors as well — if extracting the middleware requires creating a new file, it proposes both the new file and the modifications to the original.

Test Writing

Ask OpenCode to generate tests for existing code: Write comprehensive Jest tests for @src/utils/dateFormatter.ts — cover edge cases including invalid dates, timezone boundaries, and locale formatting. The quality of generated tests depends heavily on the backend model. Claude Sonnet 4 and GPT-4o produce thorough test suites with good edge case coverage. Local models like Gemma 4 27B cover the main paths but tend to miss subtle edge cases — use them as a starting point and add additional assertions manually.

Debugging

Paste an error message or stack trace directly into OpenCode and ask for help: "I am getting this error when running my tests: [paste error]. The relevant code is in @src/services/payment.ts. What is causing this and how do I fix it?" OpenCode analyses the error, examines the referenced file, and explains both the root cause and the fix. For common errors (type mismatches, null references, async/await issues), it gets it right on the first attempt most of the time.

Project Exploration

OpenCode is excellent for understanding unfamiliar codebases. Navigate to a project you have never worked on and ask: "Explain the architecture of this project. What are the main modules, how do they interact, and where is the entry point?" OpenCode scans the directory structure, reads key files, and provides a high-level architectural overview. Follow up with specific questions like "how does the authentication flow work?" to drill deeper into individual systems.

OpenCode vs Claude Code vs Cursor vs GitHub Copilot

This is the comparison most developers want. Here is an honest side-by-side assessment:

| Feature | OpenCode | Claude Code | Cursor | GitHub Copilot |
| --- | --- | --- | --- | --- |
| Price | Free (MIT licence) | $20+/month (Anthropic plan) | $20/month (Pro) | $19/month (Individual) |
| Model flexibility | 75+ models, any provider | Claude models only | Claude, GPT, hosted models | GitHub-hosted models |
| Local/offline models | Yes (Ollama) | No | No | No |
| Interface | Terminal-based | Terminal-based | Full IDE (VS Code fork) | VS Code extension |
| Inline autocomplete | No | No | Yes | Yes |
| Chat / code generation | Yes | Yes | Yes | Yes (Copilot Chat) |
| Multi-file editing | Yes (with diff review) | Yes | Yes (Composer) | Limited |
| Shell command execution | Yes | Yes | Limited | No |
| LSP integration | Yes | No | Yes (native IDE) | Yes (via VS Code) |
| Privacy (fully local option) | Yes | No | No | No |
| Open source | Yes (MIT) | No | No | No |
| Project context depth | Good (file refs + LSP) | Excellent | Excellent (full codebase) | Good (workspace indexing) |

The takeaway: OpenCode offers the most flexibility and the best price (free), but it trades away the polished GUI experience and inline autocomplete that Cursor and Copilot provide. If you are comfortable working in the terminal and value model choice and privacy, OpenCode is the strongest option. If inline tab-completions are essential to your workflow, you will still need Copilot or Cursor alongside it.

Limitations

OpenCode is impressive for its age and community momentum, but it is important to understand where it falls short today:

  • Terminal only. There is no graphical interface, no sidebar panel, no inline code decorations. Everything happens in the terminal. Some developers love this; others find it disorienting compared to the integrated experience of Cursor or Copilot.
  • No inline autocomplete. OpenCode does not provide tab-completion suggestions as you type. It is a conversational tool — you ask for code and it generates it. If ghost-text autocomplete is central to your workflow, you will need a separate tool for that.
  • Output quality depends on the model. OpenCode itself is just an interface. The quality of code it generates, the accuracy of its explanations, and the usefulness of its refactoring suggestions all depend on which backend model you configure. A local 7B model produces noticeably worse output than Claude Sonnet 4 or GPT-4o.
  • Early and rapidly evolving project. Despite the 126K+ stars, OpenCode is still a young project. Breaking changes between versions are possible, documentation has gaps, and some features are experimental. Check the GitHub issues and changelog before updating to a new version.
  • Token costs with cloud providers. While OpenCode itself is free, using it with OpenAI, Anthropic, or Google models incurs API usage costs. Heavy usage (large file contexts, long conversations) can accumulate meaningful bills. Monitor your API dashboard and use the /compact command to reduce token consumption during long sessions.

Frequently Asked Questions

Is OpenCode truly free to use?

OpenCode itself is completely free and open source under the MIT licence. You can download, use, modify, and distribute it without any cost. However, if you connect it to cloud model providers (OpenAI, Anthropic, Google), you pay those providers' standard API rates for the tokens you consume. To use OpenCode at zero cost, pair it with Ollama and a local model — everything runs on your hardware with no external API calls.

Can OpenCode replace Cursor or GitHub Copilot entirely?

For chat-based coding workflows (code generation, refactoring, test writing, debugging, code explanation), OpenCode is a capable replacement, especially with a strong backend model like Claude Sonnet 4 or GPT-4o. Where it cannot replace Cursor or Copilot is inline autocomplete — the ghost-text suggestions that appear as you type. If tab-completion is a significant part of your workflow, you will likely want to keep Copilot or Cursor for that and use OpenCode for conversational AI coding tasks.

Does OpenCode work on Windows?

Yes. OpenCode provides pre-built binaries for Windows, macOS, and Linux. On Windows, it works in PowerShell, Command Prompt, Windows Terminal, and the VS Code integrated terminal. The experience is identical across platforms. If you use Ollama for local models on Windows, make sure the Ollama service is running before launching OpenCode.
