Feature: State Streaming

CopilotKit keeps the CoAgent state in sync with the UI. Imagine the example_node below starts with an initial state of {"status": "pending"}.

Your CopilotKit-based frontend receives this state as part of the agent's state. Once the agent finishes example_node and returns the updated state, it is synced again, resulting in {"status": "done"} on both the agent and the frontend.

from langchain_core.runnables import RunnableConfig

def example_node(state: AgentState, config: RunnableConfig):
	# work happens here ...
	state["status"] = "done"
	return state
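
For context, here is a minimal sketch of how such a node might be wired into a graph. The wiring is standard LangGraph, not specific to CopilotKit, and AgentState is assumed to be your own state schema:

from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)
workflow.add_node("example_node", example_node)
workflow.set_entry_point("example_node")
workflow.add_edge("example_node", END)
graph = workflow.compile()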

Streaming messages to the chat window

What if we want to stream LLM messages to the user? We can configure CopilotKit to treat these messages as user-facing, so they appear in the chat window.

from copilotkit.langchain import configure_copilotkit
from langchain_core.messages import SystemMessage
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

async def talk_to_user_node(state: AgentState, config: RunnableConfig):
	"""
	The messages of this node will appear in the chat window.
	"""

	# configure CopilotKit to stream the messages to the chat window
	config = configure_copilotkit(
		config,
		emit_messages=True
	)

	response = await ChatOpenAI(model="gpt-4o").ainvoke(
		[
			*state["messages"],
			SystemMessage(content="Say hi to the user")
		],
		config  # <-- pass the configuration to LangChain
	)

	# return the new messages to make them part of the conversation
	# history in LangGraph
	return {
		"messages": [response]
	}

Note: If you want to get an answer from the user, you need to interrupt the agent. You can do this either by routing to the END node in LangGraph or by using LangGraph interrupts.
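
A sketch of the first option, using the node name from the example above: routing to END finishes the run, and the user's reply starts the next run with their answer in the messages.

from langgraph.graph import END

# end the run after asking the user; their reply arrives as the
# next message and kicks off a new run containing the answer
workflow.add_edge("talk_to_user_node", END)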

Streaming state from LLM tool calls

Often, you want to update your state directly from LLM tool calls. Wouldn't it be nice if you could stream these state updates while the LLM is still generating them?

You can configure CopilotKit to treat tool calls as state updates with the emit_intermediate_state setting like this:

from copilotkit.langchain import configure_copilotkit
from langchain_core.runnables import RunnableConfig

async def streaming_state_node(state: AgentState, config: RunnableConfig):
	config = configure_copilotkit(
		config,

		# this will stream tool calls *as if* they were state
		emit_intermediate_state=[
			{
				"state_key": "outline",
				"tool": "set_outline",
				"tool_argument": "outline",
			}
		]
	)

	# call your LLM here and read `outline` from its tool call ...

	return {
		**state,
		"outline": outline  # <- you still need to return your final state
	}

With emit_intermediate_state you can specify that a specific key in your state should be set from a tool call. In this example, the state key "outline" (via state_key) is streamed from the tool "set_outline". tool_argument optionally specifies which tool argument to read, in this case "outline".
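
To make this concrete, here is a sketch of what a matching tool might look like. The tool definition itself is illustrative; only the tool name set_outline and its outline argument need to line up with the configuration above:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def set_outline(outline: str):
	"""Set the outline of the document."""
	return outline

# bind the tool so the LLM can call it; its "outline" argument is
# streamed into state["outline"] while it is being generated
model = ChatOpenAI(model="gpt-4o").bind_tools([set_outline])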

Calling frontend actions

CopilotKit provides a type for your state called CopilotKitState that you can use and extend. It contains an array of actions that you can pass directly to LangChain as tools. This array contains all actions available in your CopilotKit setup, including those defined on the frontend.
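
A minimal sketch of extending it, assuming CopilotKitState is importable from the copilotkit package (the extra fields are illustrative):

from copilotkit import CopilotKitState

class AgentState(CopilotKitState):
	# your own fields, in addition to the messages and
	# actions that CopilotKitState provides
	outline: str
	status: str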

You can make calls to the frontend by 1) configuring CopilotKit to emit tool calls and 2) providing the actions to the LLM. After that, you need to interrupt the execution of the LangGraph agent so that the tool can be executed and the agent can continue with the result from the action.

from copilotkit.langchain import configure_copilotkit
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

async def frontend_actions_node(state: AgentState, config: RunnableConfig):
	config = configure_copilotkit(
		config,

		# 1) configure CopilotKit to emit tool calls
		emit_tool_calls=True,
	)

	# 2) provide the actions to the LLM as tools
	model = ChatOpenAI(model="gpt-4o").bind_tools(state["copilotkit"]["actions"])

	response = await model.ainvoke(state["messages"], config)

	return {
		"messages": [response]
	}
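
Then interrupt the graph after this node so the frontend can execute the tool call and return the result. A sketch, assuming the node is registered as "frontend_actions_node" and using an in-memory checkpointer:

from langgraph.checkpoint.memory import MemorySaver

# pause after the node; the frontend runs the action, and the run
# resumes with the action's result in the conversation
graph = workflow.compile(
	checkpointer=MemorySaver(),
	interrupt_after=["frontend_actions_node"]
)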

Want to Run CoAgents in Production?

We offer tailored solutions for Enterprise customers. We'd be happy to support you with custom use cases and with deploying and scaling CoAgents in production.
