Getting Started
LangGraph agents are written in Python, using the LangGraph Python SDK and an API runtime server such as FastAPI. This guide presumes you're familiar with LangGraph and FastAPI, but if you're new to either, you can find a detailed setup walkthrough for our bootstrap project on our GitHub repo.
To integrate LangGraph-based agents into your CopilotKit application, you'll need the copilotkit Python package:
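You can install it with pip (fastapi and uvicorn are included here on the assumption that you'll serve the endpoint yourself):

```shell
pip install copilotkit fastapi uvicorn
```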
The CopilotKit integration is a simple endpoint that you can easily add to your FastAPI application, powered by any LangGraph agent:
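A minimal sketch of such an endpoint is shown below. The agent name "research_agent", its description, and the my_agent module are placeholders for your own agent; the class and helper names follow the copilotkit Python package, but may differ slightly between SDK versions.

```python
# server.py -- a minimal sketch; "research_agent" and my_agent.graph are
# placeholders for your own agent name and compiled LangGraph graph.
from fastapi import FastAPI
from copilotkit import CopilotKitSDK, LangGraphAgent
from copilotkit.integrations.fastapi import add_fastapi_endpoint

from my_agent import graph  # your compiled LangGraph graph

app = FastAPI()

sdk = CopilotKitSDK(
    agents=[
        LangGraphAgent(
            name="research_agent",
            description="Researches a topic and drafts an answer.",
            graph=graph,
        )
    ]
)

# Expose the agent(s) at POST /copilotkit
add_fastapi_endpoint(app, sdk, "/copilotkit")
```

Run the server with `uvicorn server:app --port 8000`; the Copilot Runtime will connect to this endpoint in Step 2 below.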
Self-hosting CopilotKit in your app
Step 1: Set up Copilot Runtime Endpoint
For now, to use CoAgents, you'll need to self-host the Copilot Runtime. Soon CoAgents will be available through Copilot Cloud.
Add your OpenAI API key to your .env file in the root of your project:
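For example (the placeholder value stands in for your actual key):

```shell
OPENAI_API_KEY=your-openai-api-key
```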
Please note that the code below uses GPT-4o, which requires a paid OpenAI API key. If you are using a free OpenAI API key, change the model to a different option, such as gpt-3.5-turbo.
Endpoint Setup
Create a new route to handle the /api/copilotkit endpoint.
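The sketch below assumes a Next.js App Router project; the helper names follow the @copilotkit/runtime package and may vary by version:

```typescript
// app/api/copilotkit/route.ts -- a sketch assuming the Next.js App Router.
import {
  CopilotRuntime,
  OpenAIAdapter,
  copilotRuntimeNextJSAppRouterEndpoint,
} from "@copilotkit/runtime";
import { NextRequest } from "next/server";

// Reads OPENAI_API_KEY from the environment.
const serviceAdapter = new OpenAIAdapter({ model: "gpt-4o" });
const runtime = new CopilotRuntime();

export const POST = async (req: NextRequest) => {
  const { handleRequest } = copilotRuntimeNextJSAppRouterEndpoint({
    runtime,
    serviceAdapter,
    endpoint: "/api/copilotkit",
  });
  return handleRequest(req);
};
```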
Your Copilot Runtime endpoint should be available at http://localhost:3000/api/copilotkit.
Step 2: Point remote actions at the FastAPI endpoint you created with the Copilot Python Backend SDK:
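A sketch of the runtime configuration, assuming the FastAPI server from Step 1 is running on port 8000. Note that the option name has varied across @copilotkit/runtime versions (remoteActions in earlier releases, remoteEndpoints in later ones), so check the version you have installed:

```typescript
import { CopilotRuntime } from "@copilotkit/runtime";

// Point the runtime at the FastAPI endpoint served by the Python SDK.
const runtime = new CopilotRuntime({
  remoteEndpoints: [
    { url: "http://localhost:8000/copilotkit" },
  ],
});
```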
Integrating agents into your CopilotKit frontend app
By default, CoAgents are like skills you provide to the CopilotKit SDK. CopilotKit takes care of starting a specific agent when it detects that the agent is needed; all you need to do is provide your agents to CopilotKit on the Python side.
Sometimes, you'll want a specific agent to run even before your users take action, ensuring that the user interacts with that agent directly. For these scenarios, you can "lock" a specific agent in your CopilotKit frontend by passing in the agent prop:
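For example (the agent name "research_agent" is a placeholder, and must match the name registered on the Python side):

```typescript
import { CopilotKit } from "@copilotkit/react-core";
import React from "react";

// Lock the UI to a single agent by passing the agent prop.
export default function Layout({ children }: { children: React.ReactNode }) {
  return (
    <CopilotKit runtimeUrl="/api/copilotkit" agent="research_agent">
      {children}
    </CopilotKit>
  );
}
```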
For more information on agent-lock mode, and router mode, see Router Mode and Agent Lock.
You might also want to start your agent programmatically. For this, you can use the start and stop functions returned by the useCoAgent hook:
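A minimal sketch, again assuming an agent registered as "research_agent":

```typescript
import { useCoAgent } from "@copilotkit/react-core";
import React from "react";

export function AgentControls() {
  // start/stop let you run or halt the agent programmatically.
  const { start, stop } = useCoAgent({ name: "research_agent" });

  return (
    <div>
      <button onClick={() => start()}>Start agent</button>
      <button onClick={() => stop()}>Stop agent</button>
    </div>
  );
}
```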
Streaming state updates
CopilotKit will automatically sync state between your LangGraph agent and the frontend when entering or exiting a node.
For more detailed user feedback or observability, you can also configure CopilotKit to stream messages, LLM state updates, and tool calls from your LangGraph agent by setting a few options in our Python SDK.
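A sketch of what this looks like inside a LangGraph node, using copilotkit_customize_config from the Python SDK. The node, model, state key, and tool names here are illustrative, not part of the SDK:

```python
from copilotkit.langchain import copilotkit_customize_config

async def chat_node(state, config):
    # Ask CopilotKit to stream messages and tool calls from this node,
    # and to emit intermediate state while the LLM is still generating.
    config = copilotkit_customize_config(
        config,
        emit_messages=True,
        emit_tool_calls=True,
        emit_intermediate_state=[
            {"state_key": "outline", "tool": "write_outline"},
        ],
    )
    # "model" stands in for whatever LangChain chat model your node uses.
    response = await model.ainvoke(state["messages"], config)
    return {"messages": [response]}
```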
Read more about state streaming
Working with agent state from the frontend
CopilotKit provides React hooks for accessing your agent's state in the frontend, allowing you to get live updates in response to state changes, as well as to update the shared state from anywhere in your application.
We provide two hooks to access the shared agent state:
useCoAgent: access the state of the agent from anywhere in your application.
useCoAgentAction: access the state and render it to the chat window.
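A sketch of both hooks together. The agent name, the AgentState shape, and the components are placeholders, and the exact render-callback signature of useCoAgentAction may differ between SDK versions:

```typescript
import { useCoAgent, useCoAgentAction } from "@copilotkit/react-core";
import React from "react";

// Placeholder shape for your agent's shared state.
type AgentState = { outline: string };

export function OutlineEditor() {
  // Read and write the shared agent state from anywhere in the app.
  const { state, setState } = useCoAgent<AgentState>({
    name: "research_agent",
    initialState: { outline: "" },
  });

  return (
    <textarea
      value={state.outline}
      onChange={(e) => setState({ outline: e.target.value })}
    />
  );
}

export function OutlineInChat() {
  // Render the agent's live state inside the chat window.
  useCoAgentAction({
    name: "research_agent",
    render: ({ state }: { state: AgentState }) => <div>{state.outline}</div>,
  });
  return null;
}
```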
Read more about sharing agent state
Agent Q&A
One of the key ways agents differ from normal chatbots is their ability to interrupt execution and ask for user feedback, using standard LangGraph mechanisms (interrupts, the __END__ node, etc.).
Most LangGraph agents should work out-of-the-box with CopilotKit, requiring only minimal configuration.
CoAgents receive the whole message history as context, even in nodes that are not directly exposed to the user, so they can take the user's feedback into account and provide precisely tailored actions and responses.
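As an illustration, here is a plain LangGraph graph that pauses for user feedback using a standard interrupt. All node, state, and variable names are illustrative; nothing here is specific to CopilotKit:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    draft: str
    feedback: str

def write_draft(state: State) -> State:
    return {"draft": "First draft...", "feedback": ""}

def revise(state: State) -> State:
    return {"draft": state["draft"] + " (revised)", "feedback": state["feedback"]}

builder = StateGraph(State)
builder.add_node("write_draft", write_draft)
builder.add_node("revise", revise)
builder.set_entry_point("write_draft")
builder.add_edge("write_draft", "revise")
builder.add_edge("revise", END)

# Pause after the draft node so the user can supply feedback before revising.
graph = builder.compile(
    checkpointer=MemorySaver(),
    interrupt_after=["write_draft"],
)
```

Because CoAgents see the full message history, the user's feedback supplied at the interrupt is available as context when execution resumes in the revise node.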