# CopilotRuntime

Handles requests from the frontend, provides function calling, and supports various LLM backends.

This class is the main entry point for the runtime. Currently, you can use `OpenAIAdapter` for direct access to the OpenAI API and `LangChainAdapter` to use the LangChain API as a backend.
## OpenAIAdapter

Simply pass an instance of `OpenAIAdapter` to the `response` method of `CopilotRuntime` to use OpenAI as a backend.

```ts
const copilotKit = new CopilotRuntime();
return copilotKit.response(req, new OpenAIAdapter());
```
## OpenAIAssistantAdapter

To use the assistant API as a backend, use `OpenAIAssistantAdapter`:

```ts
const copilotKit = new CopilotRuntime();
return copilotKit.response(
  req,
  new OpenAIAssistantAdapter({ assistantId: "your-assistant-id" })
);
```
The assistant adapter supports the following configuration options:

- `assistantId` (required): The ID of the assistant to use.
- `openai` (optional): An instance of `OpenAI` to use for the request. If not provided, a new instance will be created.
- `codeInterpreterEnabled` (optional): Whether to enable the code interpreter. Defaults to `true`.
- `retrievalEnabled` (optional): Whether to enable retrieval. Defaults to `true`.
## GoogleGenerativeAIAdapter

To use Google Gemini as a backend, use `GoogleGenerativeAIAdapter`:

```ts
const copilotKit = new CopilotRuntime();
const genAI = new GoogleGenerativeAI(process.env["GOOGLE_API_KEY"]!);
const model = genAI.getGenerativeModel({ model: "gemini-pro" });
return copilotKit.response(req, new GoogleGenerativeAIAdapter({ model }));
```
## LangChainAdapter

To use LangChain as a backend, provide a handler function to the adapter with your custom LangChain logic. The async handler function can return:

- a simple `string` response
- a LangChain stream (`IterableReadableStream`)
- a LangChain `BaseMessageChunk` object
- a LangChain `AIMessage` object
This example streams back OpenAI messages via LangChain:

```ts
return copilotKit.response(
  req,
  new LangChainAdapter(async (forwardedProps) => {
    const model = new ChatOpenAI({ modelName: "gpt-4-1106-preview" });
    return model.stream(forwardedProps.messages, {
      tools: forwardedProps.tools,
    });
  })
);
```
## Server Side Actions

CopilotKit supports actions that can be executed on the server side. You can define server side actions by passing the `actions` parameter:
```ts
const copilotKit = new CopilotRuntime({
  actions: [
    {
      name: "sayHello",
      description: "Says hello to someone.",
      argumentAnnotations: [
        {
          name: "arg",
          type: "string",
          description: "The name of the person to say hello to.",
          required: true,
        },
      ],
      implementation: async (arg) => {
        console.log("Hello from the server", arg, "!");
      },
    },
  ],
});
```
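Conceptually, an action definition is just a plain object pairing argument annotations with an implementation function. The following standalone sketch (no CopilotKit imports; the `dispatch` helper is hypothetical and only illustrates the idea, not the actual runtime internals) shows how a runtime might map the LLM's named function-call arguments onto the implementation's positional parameters:

```typescript
// Standalone sketch of the action shape used above. `dispatch` is a
// hypothetical helper for illustration only.
type ArgumentAnnotation = {
  name: string;
  type: string;
  description: string;
  required: boolean;
};

type ServerAction = {
  name: string;
  description: string;
  argumentAnnotations: ArgumentAnnotation[];
  implementation: (...args: any[]) => Promise<any>;
};

const sayHello: ServerAction = {
  name: "sayHello",
  description: "Says hello to someone.",
  argumentAnnotations: [
    {
      name: "arg",
      type: "string",
      description: "The name of the person to say hello to.",
      required: true,
    },
  ],
  implementation: async (arg) => {
    console.log("Hello from the server", arg, "!");
  },
};

// Order the named arguments according to the annotations, then call through.
async function dispatch(
  action: ServerAction,
  namedArgs: Record<string, any>
): Promise<any> {
  const positional = action.argumentAnnotations.map((a) => namedArgs[a.name]);
  return action.implementation(...positional);
}

dispatch(sayHello, { arg: "world" }); // logs: Hello from the server world !
```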
Server side actions can also return a result, which becomes part of the message history. This is useful because it gives the LLM context about what happened on the server side, and it can be used to look up information from a vector or relational database and other sources. Server side actions can also come from LangChain, including support for streaming responses.
Returned results can be of the following types:

- anything serializable to JSON
- `string`
- LangChain types: `IterableReadableStream`, `BaseMessageChunk`, `AIMessage`
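As a sketch of the result-returning case, the implementation below returns a JSON-serializable object, which the runtime would append to the message history so the LLM can read it on the next turn. The in-memory "database" and the `lookUpUser` action are invented for illustration:

```typescript
// Hypothetical example: an action implementation that returns a
// JSON-serializable result. The in-memory lookup table stands in for a
// vector or relational database.
const userDatabase: Record<string, { name: string; plan: string }> = {
  u1: { name: "Ada", plan: "pro" },
};

const lookUpUser = {
  name: "lookUpUser",
  description: "Looks up a user by id.",
  argumentAnnotations: [
    { name: "userId", type: "string", description: "The user id.", required: true },
  ],
  // The returned object is serializable to JSON, so it can become part of
  // the message history and give the LLM context about the lookup.
  implementation: async (userId: string): Promise<any> => {
    return userDatabase[userId] ?? { error: "user not found" };
  },
};

lookUpUser.implementation("u1").then((result) => {
  console.log(JSON.stringify(result)); // {"name":"Ada","plan":"pro"}
});
```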
## LangServe

The backend also supports LangServe, enabling you to connect to existing chains, for example Python-based chains. Use the `langserve` parameter to specify URLs for LangServe.
```ts
const copilotKit = new CopilotRuntime({
  langserve: [
    {
      chainUrl: "http://my-langserve.chain",
      name: "performResearch",
      description: "Performs research on a given topic.",
    },
  ],
});
```
When arguments are left out, they are automatically inferred from the schema provided by LangServe.
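To make the inference idea concrete, here is a minimal standalone sketch of mapping a JSON-Schema-style chain input schema to argument annotations. The `inferArguments` function and its shapes are assumptions for illustration, not CopilotKit's actual implementation:

```typescript
// Hypothetical sketch: derive argument annotations from a JSON-Schema-like
// input schema, as a LangServe chain might expose one.
type InferredArgument = {
  name: string;
  type: string;
  description: string;
  required: boolean;
};

function inferArguments(schema: {
  properties: Record<string, { type: string; description?: string }>;
  required?: string[];
}): InferredArgument[] {
  return Object.entries(schema.properties).map(([name, prop]) => ({
    name,
    type: prop.type,
    description: prop.description ?? "",
    required: (schema.required ?? []).includes(name),
  }));
}

// Example: a chain whose input schema declares one required "topic" string.
const args = inferArguments({
  properties: { topic: { type: "string", description: "The research topic." } },
  required: ["topic"],
});
console.log(args);
```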
## Constructor

- `actions` (optional): A list of server side actions that can be executed.
- `langserve` (optional): An array of LangServe URLs.
## response(req: Request, serviceAdapter: CopilotKitServiceAdapter)

Returns a `Response` object for streaming back the result to the client.

- `req`: The HTTP request.
- `serviceAdapter`: The adapter to use for the response.
## streamHttpServerResponse(req: any, res: any, serviceAdapter: CopilotKitServiceAdapter, headers?: Record<string, string>)

Streams messages back to the client using the HTTP response object. Use this with Express, for example.

- `req`: The HTTP request.
- `res`: The HTTP response.
- `serviceAdapter`: The adapter to use for the response.
- `headers` (optional): Additional headers to send with the response.