
Self Hosting (Copilot Runtime)

Learn how to self-host the Copilot Runtime.

The Copilot Runtime is the back-end component of CopilotKit. It handles communication with the LLM, message history, state, and more.

You may choose to self-host the Copilot Runtime, or use Copilot Cloud (recommended).

Integration

Step 1: Create an Endpoint

Add your OpenAI API key to your .env file in the root of your project:

.env
OPENAI_API_KEY=your_api_key_here

Please note that the code below uses GPT-4o, which requires a paid OpenAI API key. If you are using a free OpenAI API key, change the model to a different option such as gpt-3.5-turbo.
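One way to do that is to pass a model when constructing the OpenAIAdapter in the endpoint you create below. The sketch assumes your installed version of @copilotkit/runtime supports a model option on OpenAIAdapter; check your version if it does not compile.

import { OpenAIAdapter } from '@copilotkit/runtime';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// The `model` option is assumed from recent @copilotkit/runtime versions;
// it overrides the adapter's default model for every request.
const serviceAdapter = new OpenAIAdapter({ openai, model: 'gpt-3.5-turbo' });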

Endpoint Setup

Create a new route to handle the /api/copilotkit endpoint.

app/api/copilotkit/route.ts
import {
  CopilotRuntime,
  OpenAIAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from '@copilotkit/runtime';
import OpenAI from 'openai';
import { NextRequest } from 'next/server';
 
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const serviceAdapter = new OpenAIAdapter({ openai });
const runtime = new CopilotRuntime();
 
export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: '/api/copilotkit',
  });
 
  return handleRequest(req);
};

Your Copilot Runtime endpoint should be available at http://localhost:3000/api/copilotkit.
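The Next.js App Router is only one deployment target; @copilotkit/runtime also exports endpoint helpers for other environments. The sketch below shows a standalone Node HTTP server and assumes the copilotRuntimeNodeHttpEndpoint helper is available in your installed version (verify the name and options against your package).

server.ts
import { createServer } from 'node:http';
import {
  CopilotRuntime,
  OpenAIAdapter,
  copilotRuntimeNodeHttpEndpoint,
} from '@copilotkit/runtime';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const serviceAdapter = new OpenAIAdapter({ openai });
const runtime = new CopilotRuntime();

const server = createServer((req, res) => {
  // Build a request handler for this runtime and adapter, then delegate to it.
  const handler = copilotRuntimeNodeHttpEndpoint({
    endpoint: '/copilotkit',
    runtime,
    serviceAdapter,
  });
  return handler(req, res);
});

// In this setup, the client-side runtimeUrl would point at http://localhost:4000/copilotkit.
server.listen(4000);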

Step 2: Configure the <CopilotKit> Provider

The <CopilotKit> provider must wrap the Copilot-aware parts of your application. For most use cases, it's appropriate to wrap the CopilotKit provider around the entire app, e.g. in your layout.tsx.

Point it at the Copilot Runtime URL you configured in the previous step.

layout.tsx
import { CopilotKit } from "@copilotkit/react-core";

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    // Make sure to use the URL you configured in the previous step
    <CopilotKit runtimeUrl="/api/copilotkit">
      {children}
    </CopilotKit>
  );
}
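With the provider pointed at your self-hosted runtime, components inside it can use CopilotKit's hooks as usual. As an illustrative sketch (the TasksPanel component and its data are hypothetical), useCopilotReadable exposes application state as context for the copilot:

"use client";
import { useCopilotReadable } from "@copilotkit/react-core";

// Hypothetical component: exposes its local state so the self-hosted runtime
// can include it as context when talking to the LLM.
export function TasksPanel({ tasks }: { tasks: string[] }) {
  useCopilotReadable({
    description: "The user's current task list",
    value: tasks,
  });

  return (
    <ul>
      {tasks.map((task) => (
        <li key={task}>{task}</li>
      ))}
    </ul>
  );
}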

LLM Adapters

LLM Adapters are responsible for executing requests against the LLM and standardizing the request/response format so the Copilot Runtime can understand it.

We natively support a number of LLM adapters, including the OpenAIAdapter used above. You can also use the LangChain Adapter to connect to any LLM provider we don't yet support natively!

It's not too hard to write your own LLM adapter from scratch -- see the existing adapters for inspiration. And of course, we would love a contribution! ⭐️
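As a rough sketch of what swapping adapters looks like, the example below uses the LangChain Adapter with an Anthropic model. It assumes the LangChainAdapter export and its chainFn option in your installed version of @copilotkit/runtime, and uses @langchain/anthropic and the model name purely as placeholders.

import { LangChainAdapter } from "@copilotkit/runtime";
import { ChatAnthropic } from "@langchain/anthropic";

// Example only: any LangChain chat model can be dropped in here.
const model = new ChatAnthropic({ model: "claude-3-5-sonnet-latest" });

const serviceAdapter = new LangChainAdapter({
  // chainFn receives the messages and tools forwarded by the Copilot Runtime
  // and returns a streamed LangChain response.
  chainFn: async ({ messages, tools }) => {
    return model.bindTools(tools).stream(messages);
  },
});

The resulting serviceAdapter is then passed to copilotRuntimeNextJSAppRouterEndpoint exactly like the OpenAIAdapter in Step 1.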

