How to Use MCP With ChatGPT and Your Own Tools

Coding Liquids blog cover featuring Sagnik Bhattacharya for using MCP with ChatGPT and custom tools.

The Model Context Protocol (MCP) gives language models a standard way to call external tools — file search, database queries, API calls — without custom glue code for every provider.

ChatGPT now supports MCP connections, which means you can wire it to your own tools instead of relying only on built-in plugins. This guide walks through the setup from scratch, with working examples you can adapt to your own stack.

This is the starter post in the MCP series — begin here. Next: MCP servers for AI agents, then remote MCP servers.

Note: MCP support in ChatGPT may still be evolving. Check OpenAI's latest documentation for the most current connection details.

I teach Flutter and Excel with AI — explore my courses if you want structured learning.

Quick answer

Install an MCP server that exposes your tools, point ChatGPT at it, and the model can call those tools during a conversation. The protocol handles discovery, parameter passing, and result formatting.

This setup makes sense when:

  • You want ChatGPT to interact with your own databases, APIs, or file systems.
  • You need a standard protocol instead of building custom integrations per model.
  • You are comparing MCP support across different AI providers.

What MCP actually does

MCP is a protocol that lets an AI model discover what tools are available, understand their parameters, call them, and use the results in its response. Think of it as a USB-C port for AI tools — one standard interface instead of a different cable for every device.

The protocol defines three things: a way for the server to list available tools, a schema for each tool's inputs and outputs, and a message format for the model to invoke tools and receive results.
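Concretely, the messages are JSON-RPC 2.0. A sketch of the two core requests — listing tools and calling one — looks like this (the tool name and arguments are illustrative, not fixed by the protocol):

```python
import json

# Ask the server for its tool catalogue (MCP method: tools/list).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Invoke one tool by name (MCP method: tools/call).
# "search_files" and its arguments are hypothetical examples.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_files",
        "arguments": {"query": "report", "directory": "."},
    },
}

print(json.dumps(call_request, indent=2))
```

The server replies with a matching JSON-RPC response carrying either the tool list or the tool result, which the model then folds into its answer.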


Setting up an MCP server

An MCP server is a lightweight process that exposes your tools over a standard interface. You can write one in Python, TypeScript, or any language that can handle JSON-RPC.

The server registers each tool with a name, description, and parameter schema. When ChatGPT wants to use a tool, it sends a request matching that schema, and the server returns the result.
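A registration boils down to a name, a description, and a JSON Schema for the inputs. A minimal descriptor for a hypothetical search tool — the shape a server advertises via tools/list — might look like:

```python
# Hypothetical tool descriptor, as advertised to clients via tools/list.
search_tool = {
    "name": "search_files",
    "description": "Search for files whose names match a query string.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Substring to match"},
            "directory": {"type": "string", "description": "Root directory", "default": "."},
        },
        "required": ["query"],
    },
}
```

The description matters more than it looks: it is the main signal the model uses to decide when to call the tool.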

  • Install the MCP SDK for your language (Python: `pip install mcp`, TypeScript: `npm install @modelcontextprotocol/sdk`)
  • Define your tools with clear names and parameter schemas
  • Start the server process — it listens for connections from the AI client
  • Point ChatGPT at the server URL or use the local transport

Connecting ChatGPT to your MCP server

Once your server is running, configure ChatGPT to connect to it. The exact steps depend on whether you are using the ChatGPT desktop app, API, or a wrapper.

For local development, the stdio transport is simplest — ChatGPT spawns your server as a subprocess and communicates through standard input and output. For production, the SSE or streamable HTTP transport lets you host the server remotely.

Writing your first tool

Start with something simple: a tool that searches files in a directory, queries a database table, or calls a single API endpoint. Keep the parameter schema minimal — one or two required fields.

Test the tool independently before connecting it to ChatGPT. If the tool works on its own, debugging the MCP layer is much easier.

# Example: simple file search tool, using FastMCP from the official Python SDK
from mcp.server.fastmcp import FastMCP
import glob

server = FastMCP("file-tools")

@server.tool()
async def search_files(query: str, directory: str = ".") -> str:
    """Search for files matching a query in a directory."""
    matches = glob.glob(f"{directory}/**/*{query}*", recursive=True)
    return "\n".join(matches[:20]) or "No files found."
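One way to test the logic in isolation is to call the plain function directly, before any MCP client is involved. Here the function is redefined without the decorator so the sketch runs standalone:

```python
import asyncio
import glob
import os
import tempfile

async def search_files(query: str, directory: str = ".") -> str:
    """Same logic as the MCP tool above, minus the server decorator."""
    matches = glob.glob(f"{directory}/**/*{query}*", recursive=True)
    return "\n".join(matches[:20]) or "No files found."

# Exercise it against a throwaway directory before wiring up ChatGPT.
with tempfile.TemporaryDirectory() as tmp:
    open(os.path.join(tmp, "report_q3.txt"), "w").close()
    result = asyncio.run(search_files("report", tmp))
    print(result)
```

If this call misbehaves, the bug is in your tool logic, not in the protocol layer — which is exactly the separation you want while debugging.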

Handling errors and edge cases

Tools fail. APIs time out, files do not exist, queries return empty results. Your MCP server should return clear error messages instead of crashing silently.

ChatGPT handles tool errors reasonably well — it will tell the user what went wrong and may try a different approach. But only if your error messages are descriptive enough to be useful.
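A cheap pattern, sketched here with a hypothetical database tool, is to catch expected failures and return a message that names both the cause and the input that triggered it:

```python
import sqlite3

def query_tasks(db_path: str, status: str) -> str:
    """Return matching task names, or a descriptive error string the model can relay."""
    try:
        with sqlite3.connect(db_path) as conn:
            rows = conn.execute(
                "SELECT name FROM tasks WHERE status = ?", (status,)
            ).fetchall()
    except sqlite3.OperationalError as exc:
        # e.g. missing table or unreadable file: say what failed and with what input.
        return f"Error querying {db_path!r} for status {status!r}: {exc}"
    if not rows:
        return f"No tasks with status {status!r}."
    return "\n".join(name for (name,) in rows)
```

Returning the error as a string rather than raising lets the model read it and try a different approach, instead of seeing an opaque protocol failure.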

Security considerations

An MCP server gives an AI model access to real systems. Think carefully about what you expose.

Limit tool permissions to read-only where possible. Validate all inputs. Run the server with minimal privileges. If you are exposing the server remotely, use authentication and HTTPS.
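For file tools, one common input-validation guard is to resolve every requested path and refuse anything that escapes an allowed root. A sketch — the root directory here is an assumption you would set for your own deployment:

```python
from pathlib import Path

# Assumption: the only directory this server is allowed to touch.
ALLOWED_ROOT = Path("/srv/mcp-data").resolve()

def safe_path(requested: str, root: Path = ALLOWED_ROOT) -> Path:
    """Resolve a requested path and reject anything outside the allowed root."""
    candidate = (root / requested).resolve()
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"Path {requested!r} escapes the allowed root")
    return candidate
```

Resolving before checking is the important part: it defeats `../` traversal and symlink tricks that a naive string prefix check would miss.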

Worked example: ChatGPT queries a project database

You build an MCP server with two tools: one searches a project database by name or status, another returns task details by ID. After connecting ChatGPT, you can ask questions like 'What tasks are overdue in the backend project?' and ChatGPT calls the right tools, combines the results, and gives you a summary.
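The two tools could be sketched like this over SQLite. Table layout and tool names are hypothetical, and the MCP decorator is omitted so the sketch runs standalone:

```python
import sqlite3

def _connect(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # access columns by name
    return conn

def search_tasks(db_path: str, status: str) -> str:
    """Tool 1: list tasks in the project database matching a status."""
    with _connect(db_path) as conn:
        rows = conn.execute(
            "SELECT id, title FROM tasks WHERE status = ?", (status,)
        ).fetchall()
    return "\n".join(f"{r['id']}: {r['title']}" for r in rows) or "No matches."

def get_task(db_path: str, task_id: int) -> str:
    """Tool 2: return full details for one task by ID."""
    with _connect(db_path) as conn:
        row = conn.execute(
            "SELECT title, status, due FROM tasks WHERE id = ?", (task_id,)
        ).fetchone()
    if row is None:
        return f"No task with id {task_id}."
    return f"{row['title']} | status={row['status']} | due={row['due']}"
```

For the overdue-tasks question, ChatGPT would first call the search tool to find candidate IDs, then the detail tool for each, and summarise — the orchestration is the model's job, not yours.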

Common mistakes

  • Exposing write operations without authentication.
  • Building tools with vague descriptions that confuse the model.
  • Skipping local testing and debugging the protocol layer instead of the tool logic.

When to use something else

If you need MCP with Claude Code in VS Code, the setup is similar but uses Anthropic's client. For AI-assisted app building, see Create with AI in Flutter.

How to apply this in a real AI project

This setup becomes much more useful once it is tied to the rest of the workflow around it. In real work, the result depends on model selection, prompt design, tool integration, evaluation, and the operational reality of shipping AI features, not only on following one local tip correctly.

That is why the biggest win rarely comes from one clever move in isolation. It comes from making the surrounding process easier to review, easier to repeat, and easier to hand over when another person inherits the workbook or codebase later.

  • Test with realistic inputs before shipping, not just the examples that inspired the idea.
  • Keep the human review step visible so the workflow stays trustworthy as it scales.
  • Measure what matters for your use case instead of relying on general benchmarks.

How to extend the workflow after this guide

Once the core technique works, the next leverage usually comes from standardising it. That might mean naming inputs more clearly, keeping one review checklist, or pairing this page with neighbouring guides so the process becomes repeatable rather than person-dependent.

The follow-on guides below are the most natural next steps from here. They help move you from one useful page toward a stronger connected system.

Related guides on this site

These guides cover related MCP and tool-calling patterns you can build on.

Want to use AI tools more effectively?

My courses cover practical AI workflows, from spreadsheet automation to app development, with real projects and honest tool comparisons.

Browse AI courses