Self Hosting (Copilot Runtime)
Learn how to self-host the Copilot Runtime.
The Copilot Runtime is the back-end component of CopilotKit. It handles communication with the LLM, message history, state, and more.
You may choose to self-host the Copilot Runtime, or use Copilot Cloud (recommended).
Integration
Step 1: Create an Endpoint
Add your OpenAI API key to the `.env` file in the root of your project:
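For example, your `.env` file might contain an entry like the following (the key value is a placeholder; use your own key):

```
OPENAI_API_KEY=your-openai-api-key-here
```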
Please note that the code below uses GPT-4o, which requires a paid OpenAI API key. If you are using a free OpenAI API key, change the model to a different option such as `gpt-3.5-turbo`.
Endpoint Setup
Create a new route to handle the `/api/copilotkit` endpoint.
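As a sketch, a Next.js App Router handler for this endpoint might look like the following. It assumes the `@copilotkit/runtime` package and its `copilotRuntimeNextJSAppRouterEndpoint` helper; if you use a different framework, the runtime ships analogous endpoint helpers and the wiring will differ slightly:

```typescript
// app/api/copilotkit/route.ts
import {
  CopilotRuntime,
  OpenAIAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { NextRequest } from "next/server";

// The service adapter talks to the LLM provider (OpenAI here).
const serviceAdapter = new OpenAIAdapter({ model: "gpt-4o" });

// The runtime orchestrates messages, state, and actions.
const runtime = new CopilotRuntime();

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};
```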
Your Copilot Runtime endpoint should be available at `http://localhost:3000/api/copilotkit`.
Step 2: Configure the `<CopilotKit>` Provider
The `<CopilotKit>` provider must wrap the Copilot-aware parts of your application. For most use cases, it's appropriate to wrap the provider around the entire app, e.g. in your `layout.tsx`.
Point it at the Copilot Runtime URL you configured in the previous step.
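For instance, a root layout wrapping the whole app might look like this sketch; it assumes the `@copilotkit/react-core` package and that the `runtimeUrl` prop points at the endpoint from Step 1:

```typescript
// app/layout.tsx
import { CopilotKit } from "@copilotkit/react-core";
import React from "react";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        {/* Point the provider at the Copilot Runtime endpoint from Step 1 */}
        <CopilotKit runtimeUrl="/api/copilotkit">{children}</CopilotKit>
      </body>
    </html>
  );
}
```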
LLM Adapters
LLM Adapters are responsible for executing the request with the LLM and standardizing the request/response format in a way that the Copilot Runtime can understand.
Currently, we support the following LLM adapters natively:
- OpenAI Adapter
- OpenAI Assistant Adapter
- LangChain Adapter
- Groq Adapter
- Google Generative AI Adapter
- Anthropic Adapter
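Switching providers is typically a one-line change: you instantiate a different adapter and pass it to the same runtime endpoint. As an illustrative sketch (the model names shown are assumptions; check your provider's current model list):

```typescript
import { OpenAIAdapter, GroqAdapter } from "@copilotkit/runtime";

// OpenAI-backed adapter
const openaiAdapter = new OpenAIAdapter({ model: "gpt-4o" });

// Groq-backed adapter: same runtime wiring, different provider
const groqAdapter = new GroqAdapter({ model: "llama-3.3-70b-versatile" });
```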
You can use the LangChain Adapter to plug in any LLM provider we don't yet support natively!
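As a sketch of that approach, the `LangChainAdapter` accepts a `chainFn` that receives the messages and tools and returns a LangChain stream; this example assumes the `@langchain/openai` package, but any LangChain chat model should work in its place:

```typescript
import { LangChainAdapter } from "@copilotkit/runtime";
import { ChatOpenAI } from "@langchain/openai";

const serviceAdapter = new LangChainAdapter({
  // chainFn bridges the runtime's request into any LangChain model.
  chainFn: async ({ messages, tools }) => {
    const model = new ChatOpenAI({ model: "gpt-4o" }).bindTools(tools);
    return model.stream(messages);
  },
});
```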
It's not too hard to write your own LLM adapter from scratch; see the existing adapters for inspiration. And of course, we would love a contribution! ⭐️