
LangGraph: An Introduction and Usage Guide

LangGraph is a library for building stateful, multi-actor applications with LLMs. It is built on top of LangChain and intended to be used together with it. It extends LangChain with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. This article walks through how to use it.

LangGraph's main purpose is to add cycles to LLM applications. Cycles matter for agent-like behavior: you call an LLM in a loop and ask it what action to take next. Note that LangGraph is not a framework for DAG (directed acyclic graph) workflows. If you want to build a DAG, you should just use LangChain. If you want to build a graph with cycles, consider LangGraph.

One of LangGraph's core concepts is state. Each graph execution creates a state that is passed between the graph's nodes as they run, and each node updates this internal state with its return value after executing. How the graph updates its internal state is defined either by the chosen graph type or by a custom function.

Introducing LangGraph

LangGraph's main contribution is a way to create cyclic graphs, which makes it easier to build agent applications. A common pattern in more complex LLM applications is to introduce a cycle into the runtime and use the LLM to reason about what to do next inside that cycle. Running an LLM in a loop generally produces more flexible applications that can handle fuzzier use cases which may not be predefined.

In agent-style LLM applications, the loop usually consists of two steps:

1. Call the LLM to determine either (a) what action to take, or (b) what response to give the user
2. Take the given action, then return to step 1

These steps repeat until a final response is produced. This is essentially the loop that powers AgentExecutor.
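The two-step loop above can be sketched in plain Python. This is a minimal illustration, with a hypothetical `fake_llm` and `run_tool` standing in for a real model call and tool execution:

```python
def fake_llm(messages):
    # Hypothetical stand-in for an LLM call: ask for one tool, then answer.
    if not any(m.startswith("tool:") for m in messages):
        return {"action": "search", "input": messages[-1]}
    return {"response": "final answer based on tool output"}

def run_tool(action, tool_input):
    # Hypothetical tool execution, returning an observation string.
    return f"tool:{action}({tool_input})"

def agent_loop(user_input):
    messages = [user_input]
    while True:
        decision = fake_llm(messages)   # step 1: ask the LLM what to do
        if "response" in decision:      # (b) respond to the user: exit the loop
            return decision["response"]
        # (a) take the chosen action, then return to step 1
        messages.append(run_tool(decision["action"], decision["input"]))

print(agent_loop("What is LangGraph?"))  # final answer based on tool output
```

The loop terminates only when the model chooses to respond rather than act, which is exactly why a cyclic graph (rather than a DAG) is needed.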


When building agents, we may want to always force the agent to call a particular tool first, to control more precisely how tools are invoked, or to give the agent different prompts depending on the state it is in. When we talk about these more controlled flows, we internally call them "state machines", and LangGraph provides exactly this kind of state machine. These state machines are able to loop, which lets them handle more ambiguous inputs than a simple chain.


A few LangGraph concepts are worth introducing first.

StateGraph

StateGraph is the class that represents the graph. We initialize it by passing in a state definition, which describes a state object that is updated over time. The state is updated by the nodes in the graph, which return updates to the state's attributes (in the form of key-value pairs).

The attributes of the state can be updated in two ways. First, an attribute can be completely overwritten; this is useful when you want a node to return a new value for it. Second, an attribute can be updated by accumulation; this is useful when the attribute is a list of, say, actions taken, and you want nodes to return newly taken actions that are automatically appended to it.

We can define a state object like this:

from typing import TypedDict, List, Annotated
import operator

from langgraph.graph import StateGraph


class State(TypedDict):
    input: str
    actions: Annotated[List[str], operator.add]


graph = StateGraph(State)
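The two update modes (overwrite vs. accumulate) can be illustrated with plain Python. This is a sketch of the reducer idea implied by the Annotated metadata, not LangGraph's actual implementation:

```python
import operator
from typing import Annotated, List, TypedDict, get_type_hints

class State(TypedDict):
    input: str                                   # plain key: overwritten
    actions: Annotated[List[str], operator.add]  # annotated key: accumulated

def apply_update(state, update):
    """Merge a node's partial return value into the state (illustrative only)."""
    hints = get_type_hints(State, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints[key], "__metadata__", ())
        if metadata:   # a reducer such as operator.add was attached via Annotated
            merged[key] = metadata[0](state[key], value)
        else:          # no reducer: overwrite the old value
            merged[key] = value
    return merged

state = {"input": "hi", "actions": ["search"]}
state = apply_update(state, {"input": "hello", "actions": ["respond"]})
print(state)  # {'input': 'hello', 'actions': ['search', 'respond']}
```

Note how `input` was replaced while `actions` grew: the metadata on the annotation, not the node, decides how each key is merged.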

Nodes

After creating a StateGraph, we can add nodes with the graph.add_node(name, value) syntax. The name argument is a string naming the node; we use it to refer to the node when adding edges. The value argument is a function or LangChain runnable that will be called. This function/runnable should accept a dictionary in the same form as the State object as input, and output a dictionary containing the keys of the State object it wants to update.

Edges

After adding nodes, we add edges to build the graph. There are several kinds of edges.

Starting Edge

This is the edge that connects the start of the graph to a particular node, making that node the first one called when input is passed to the graph.

Normal Edges

With these edges, one node should always be called after another. An example is the basic agent runtime, where we always want the model to be called after a tool is invoked.

Conditional Edges

This is an edge that uses a function (often backed by an LLM) to determine which node to go to next. To create one, we pass three things:

1. The upstream node: its output is examined to decide what to do next
2. A function: called to determine which node to go to next; it should return a string
3. A mapping: used to map the output of the function in (2) to another node. The keys are the possible values the function in (2) can return; the values are the names of the nodes to go to when that value is returned.

For example, after calling the model we either exit the graph and return the result to the user, or we call a tool, depending on what the model decided!

Compile

After defining the graph, we can compile it into a runnable. This simply takes the graph definition we have built so far and returns a runnable object. That runnable exposes the same methods as LangChain runnables (.invoke, .stream, .astream_log, etc.), so it can be called in the same way as a chain.
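Conceptually, compiling turns the node-and-edge definition into something you can .invoke(). A stdlib-only sketch of that idea (not the real LangGraph internals; reducers and conditional edges are omitted) looks like:

```python
class MiniGraph:
    """Toy runnable: walks normal edges from an entry node to an end marker."""
    END = "__end__"

    def __init__(self, nodes, edges, entry):
        self.nodes, self.edges, self.entry = nodes, edges, entry

    def invoke(self, state):
        current = self.entry
        while current != self.END:
            # Each node returns a partial update, merged here by simple overwrite
            state = {**state, **self.nodes[current](state)}
            current = self.edges[current]  # follow the node's outgoing edge
        return state

nodes = {"model": lambda s: {"output": s["input"].upper()}}
graph = MiniGraph(nodes, edges={"model": MiniGraph.END}, entry="model")
print(graph.invoke({"input": "hello"}))  # {'input': 'hello', 'output': 'HELLO'}
```

The real compiled graph offers .stream and async variants as well, but the control flow is the same: start at the entry node and follow edges until the end.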

Using LangGraph

Next, let's see how to develop agent programs with LangGraph.

Implementing a simple chatbot agent

Let's start with a simple example: building a basic chatbot with LangGraph.

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


# Define a dict type with a single "messages" key. Annotated describes the value:
# its type is list, and add_messages is the reducer that appends new messages to it.
# Annotated usage: Annotated[<type>, <metadata1>, <metadata2>, ...]
class State(TypedDict):
    messages: Annotated[list, add_messages]


# Create the StateGraph, passing in the state definition
graph_builder = StateGraph(State)

The code above does the following:

1. Every node we define will receive the current state as input and return a value that updates that state.
2. Messages are appended to the current list rather than overwriting it. This is communicated via the prebuilt add_messages function in the Annotated syntax.
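The appending behavior that add_messages provides can be mimicked with a plain list reducer. This is a simplified sketch; my understanding is that the real add_messages additionally merges messages that share an id:

```python
def append_messages(existing, new):
    # Sketch of add_messages' core behavior: append instead of overwrite.
    return list(existing) + list(new)

history = []
history = append_messages(history, [("user", "hi")])
history = append_messages(history, [("assistant", "hello!")])
print(history)  # [('user', 'hi'), ('assistant', 'hello!')]
```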

Next, add a chatbot node to the graph:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0)


def chatbot(state: State):
    # Pass the state's messages to the LLM and return its reply as a state update
    return {"messages": [llm.invoke(state["messages"])]}


# Add the chatbot node
graph_builder.add_node("chatbot", chatbot)

Next, set the entry and finish points:

graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")

Finally, run the graph:

graph = graph_builder.compile()

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)

That is a minimal LangGraph example. The development steps are roughly: create a StateGraph object -> add nodes -> set the entry (and finish) points -> compile and run the graph.


Adding tools to the agent

Now for a more complex example, which demonstrates how to use multiple nodes and multiple edges in LangGraph to build a cyclic agent.

First, create a tool:

from langchain_community.tools.tavily_search import TavilySearchResults

tool = TavilySearchResults(max_results=3)
tools = [tool]

Then define the LLM and bind the tools to it:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, streaming=True)
llm_with_tools = llm.bind_tools(tools)

Next, define a StateGraph object and a chatbot node:

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


# Create the StateGraph
graph_builder = StateGraph(State)


def chatbot(state: State):
    """The chatbot node reads the messages from the state and passes them to the LLM."""
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


# Add the chatbot node: give it a name and the function or runnable to call
graph_builder.add_node("chatbot", chatbot)

Next, create the action node, which is responsible for executing the tool calls:

from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools=tools)
graph_builder.add_node("action", tool_node)

Next, add a conditional edge:

def should_continue(state):
    """Routing logic for the conditional edge lives in this function."""
    messages = state["messages"]
    last_message = messages[-1]
    if "tool_calls" not in last_message.additional_kwargs:
        # No tool_calls key in additional_kwargs: go straight to the end node
        return "__end__"
    else:
        # tool_calls present in additional_kwargs: keep going
        return "continue"


graph_builder.add_conditional_edges(
    "chatbot",        # the upstream node of the conditional edge
    should_continue,  # the function that decides which node to call next
    {"continue": "action", "__end__": "__end__"},  # maps should_continue's return value to a node name
)
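The routing function above can be exercised without any model. Here `FakeMessage` is a hypothetical stand-in for a LangChain message object, carrying only the additional_kwargs attribute the router inspects:

```python
class FakeMessage:
    """Minimal stand-in for an AIMessage: only additional_kwargs matters here."""
    def __init__(self, additional_kwargs=None):
        self.additional_kwargs = additional_kwargs or {}

def should_continue(state):
    last_message = state["messages"][-1]
    if "tool_calls" not in last_message.additional_kwargs:
        return "__end__"   # plain answer: route to the end of the graph
    return "continue"      # tool call requested: route to the action node

plain   = {"messages": [FakeMessage()]}
calling = {"messages": [FakeMessage({"tool_calls": [{"name": "search"}]})]}
print(should_continue(plain))    # __end__
print(should_continue(calling))  # continue
```

Keeping the router a pure function of the state like this makes the control flow easy to test in isolation.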

Next, create the remaining pieces:

# Add a normal edge
graph_builder.add_edge("action", "chatbot")
# Set the entry node
graph_builder.set_entry_point("chatbot")

graph = graph_builder.compile()

Finally, run the graph:

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break
    for event in graph.stream({"messages": [("user", user_input)]}):
        for value in event.values():
            print(value)

We can visualize the graph using the get_graph method together with one of the draw methods (e.g. draw_ascii or draw_png):

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

The visualized call flow looks like this:

[image: the compiled graph, with the chatbot and action nodes connected by conditional edges]

Adding memory to the agent

LangGraph solves the problem of conversations lacking context through persistent checkpointing. If we provide a checkpointer when compiling the graph and a thread_id when invoking it, LangGraph automatically saves the state after each step. When we invoke the graph again with the same thread_id, it loads the saved state, allowing the chatbot to pick up where it left off.

Checkpointing is far more powerful than simple chat memory: it lets you save and restore complex state at any time, enabling error recovery, human-in-the-loop workflows, time-travel interactions, and more.
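The thread_id mechanism can be pictured as a dictionary of saved states. This is a stdlib sketch of the save-and-resume idea, not the real SqliteSaver, and `chat_turn` with its echo reply is a hypothetical stand-in for one graph step:

```python
class MemorySaver:
    """Toy checkpointer: one saved message list per thread_id."""
    def __init__(self):
        self._store = {}

    def load(self, thread_id):
        return list(self._store.get(thread_id, []))

    def save(self, thread_id, messages):
        self._store[thread_id] = list(messages)

saver = MemorySaver()

def chat_turn(thread_id, user_text):
    messages = saver.load(thread_id)                      # resume from the checkpoint
    messages.append(("user", user_text))
    messages.append(("assistant", f"echo: {user_text}"))  # stand-in for the LLM
    saver.save(thread_id, messages)                       # persist after the step
    return messages

chat_turn("1", "my name is wyzane")
history = chat_turn("1", "what is my name?")  # same thread_id: context preserved
fresh   = chat_turn("2", "hello")             # new thread_id: empty history
print(len(history), len(fresh))               # 4 2
```

The same shape explains the behavior shown below: reusing thread_id "1" accumulates history, while a new thread_id starts from an empty state.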

First, define a checkpointer:

from langgraph.checkpoint.sqlite import SqliteSaver

memory = SqliteSaver.from_conn_string(":memory:")

Then pass the checkpointer parameter when compiling the graph:

graph = graph_builder.compile(checkpointer=memory)

Finally, run the graph:

config = {"configurable": {"thread_id": "1"}}
while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        print("Goodbye!")
        break

    # Pass the config {"configurable": {"thread_id": "1"}} when calling stream()
    for event in graph.stream({"messages": [("user", user_input)]}, config):
        for value in event.values():
            print(value)

The printed output is:

User: 你好
{'messages': [AIMessage(content='你好!有什么可以帮助你的吗?', response_metadata={'finish_reason': 'stop'}, id='run-7512f818-1dbf-41d4-b0d4-c2ced5cd3eab-0')]}
User: 我叫wyzane
{'messages': [AIMessage(content='你好,wyzane!很高兴认识你。有什么问题或者需要帮忙的吗?', response_metadata={'finish_reason': 'stop'}, id='run-9266cede-d4fc-4c38-bd9f-62fc8cf0de40-0')]}
User: 你是谁
{'messages': [AIMessage(content='我是一个人工智能助手,可以回答各种问题并提供帮助。有什么可以为你效劳的吗?', response_metadata={'finish_reason': 'stop'}, id='run-d536ece4-3176-4235-b2bf-8797a4ac6392-0')]}
User: 我叫什么
{'messages': [AIMessage(content='你告诉我你叫wyzane。你的名字是wyzane。有什么其他问题需要我帮忙解答吗?', response_metadata={'finish_reason': 'stop'}, id='run-6c14e618-b905-4358-a43c-5ad0cb1043e0-0')]}
User: q
Goodbye!

As you can see, the conversation retains context. If I exit the graph above and run it again while still passing {"configurable": {"thread_id": "1"}}, the running graph will still retain my earlier conversation. If a new thread_id is passed, it is effectively a new graph.

We can inspect a graph's state like this:

snapshot = graph.get_state(config)
snapshot

The output is:

StateSnapshot(values={'messages': [HumanMessage(content='你好', id='b94dfce7-04ce-4fcd-aad6-733072606d8c'), AIMessage(content='你好!有什么可以帮助你的吗?', response_metadata={'finish_reason': 'stop'}, id='run-7512f818-1dbf-41d4-b0d4-c2ced5cd3eab-0'), HumanMessage(content='我叫wyzane', id='45734d46-719d-435f-bf32-a6204b543185'), AIMessage(content='你好,wyzane!很高兴认识你。有什么问题或者需要帮忙的吗?', response_metadata={'finish_reason': 'stop'}, id='run-9266cede-d4fc-4c38-bd9f-62fc8cf0de40-0'), HumanMessage(content='你是谁', id='81fa5cfd-52e6-42aa-9792-e49309819c25'), AIMessage(content='我是一个人工智能助手,可以回答各种问题并提供帮助。有什么可以为你效劳的吗?', response_metadata={'finish_reason': 'stop'}, id='run-d536ece4-3176-4235-b2bf-8797a4ac6392-0'), HumanMessage(content='我叫什么', id='98b3c055-350e-40bd-9850-557eaa7489de'), AIMessage(content='你告诉我你叫wyzane。你的名字是wyzane。有什么其他问题需要我帮忙解答吗?', response_metadata={'finish_reason': 'stop'}, id='run-6c14e618-b905-4358-a43c-5ad0cb1043e0-0'), HumanMessage(content='你好', id='ff8a51d0-c876-4486-9d7a-0e1ab5a0189e'), AIMessage(content='你好!有什么可以帮助你的吗?如果有任何问题或需要帮助,请随时告诉我。', response_metadata={'finish_reason': 'stop'}, id='run-c93e1a11-5036-4625-90ad-8122749a3d33-0'), HumanMessage(content='我叫什么', id='3fbe105b-1162-4235-97ed-25ff6a8cd2a5'), AIMessage(content='你告诉我你叫wyzane。你的名字是wyzane。有什么其他问题需要我帮忙解答吗?', response_metadata={'finish_reason': 'stop'}, id='run-ea98054a-efbd-48eb-9871-5b2ab7c2c5ac-0')]}, next=(), config={'configurable': {'thread_id': '1', 'thread_ts': '2024-04-21T12:27:59.708249+00:00'}}, parent_config=None)

Adding human-in-the-loop interaction to the agent

When running an agent, we may need human input before continuing a task, to make sure everything runs as expected. LangGraph supports human-in-the-loop workflows in several ways. One is LangGraph's interrupt_before feature, which interrupts execution before the tool node runs. For example:

# When compiling the graph, pass interrupt_before to break before a given node executes
graph = graph_builder.compile(checkpointer=memory, interrupt_before=["action"])
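A stdlib sketch of the interrupt idea: execution stops just before a listed node, remembers where it was, and can be resumed later. This mimics the observable behavior, not LangGraph's implementation, and `InterruptibleRunner` is a hypothetical class:

```python
class InterruptibleRunner:
    """Runs nodes in a fixed order, pausing before any node in interrupt_before."""
    def __init__(self, nodes, order, interrupt_before=()):
        self.nodes = nodes
        self.order = order
        self.interrupt_before = set(interrupt_before)
        self._next = 0  # checkpoint: index of the next node to run

    def stream(self, resume=False):
        if not resume:
            self._next = 0  # a fresh run starts from the entry node
        while self._next < len(self.order):
            name = self.order[self._next]
            if not resume and name in self.interrupt_before and self._next > 0:
                return f"interrupted before {name}"  # pause, keeping the checkpoint
            resume = False  # only skip the interrupt check once, on resume
            self.nodes[name]()
            self._next += 1
        return "done"

ran = []
runner = InterruptibleRunner(
    nodes={"chatbot": lambda: ran.append("chatbot"),
           "action": lambda: ran.append("action")},
    order=["chatbot", "action"],
    interrupt_before=["action"],
)
print(runner.stream())             # interrupted before action
print(runner.stream(resume=True))  # done
print(ran)                         # ['chatbot', 'action']
```

Calling stream(resume=True) plays the role of `graph.stream(None, config)` later in this article: continue from the saved position instead of starting over.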

Here is the agent example from above again, in full:

from typing import Annotated
from typing_extensions import TypedDict

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.sqlite import SqliteSaver


tool = TavilySearchResults(max_results=3)
tools = [tool]

llm = ChatOpenAI(temperature=0, streaming=True)
llm_with_tools = llm.bind_tools(tools)


class State(TypedDict):
    messages: Annotated[list, add_messages]


# Create the StateGraph
graph_builder = StateGraph(State)


def chatbot(state: State):
    """The chatbot node reads the messages from the state and passes them to the LLM."""
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


# Add the chatbot node: give it a name and the function or runnable to call
graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("action", tool_node)


def should_continue(state):
    """Routing logic for the conditional edge lives in this function."""
    messages = state["messages"]
    last_message = messages[-1]
    print(f"last message: {last_message}")
    if "tool_calls" not in last_message.additional_kwargs:
        # No tool_calls key in additional_kwargs: go straight to the end node
        return "__end__"
    else:
        # tool_calls present in additional_kwargs: keep going
        return "continue"


graph_builder.add_conditional_edges(
    "chatbot",        # the upstream node of the conditional edge
    should_continue,  # the function that decides which node to call next
    {"continue": "action", "__end__": "__end__"},  # maps should_continue's return value to a node name
)


memory = SqliteSaver.from_conn_string(":memory:")
config = {"configurable": {"thread_id": "2"}}

# Add a normal edge
graph_builder.add_edge("action", "chatbot")
# Set the entry node
graph_builder.set_entry_point("chatbot")

graph = graph_builder.compile(checkpointer=memory)

Run the graph:

user_input = "I'm learning LangGraph. Could you do some research on it for me?"
config = {"configurable": {"thread_id": "1"}}

events = graph.stream({"messages": [("user", user_input)]}, config)
for event in events:
    for value in event.values():
        print(f"value: {value}")

If the graph was created with graph_builder.compile(checkpointer=memory), the execution above keeps running until it completes and returns a result, like this:

last message: content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_xRK1M26mLAGIB0oKJvrivudr', 'function': {'arguments': '{"query":"LangGraph"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls'} id='run-bdc06d2d-021f-4922-8de4-0a7713c10aef-0' tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'call_xRK1M26mLAGIB0oKJvrivudr'}]
value: {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_xRK1M26mLAGIB0oKJvrivudr', 'function': {'arguments': '{"query":"LangGraph"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-bdc06d2d-021f-4922-8de4-0a7713c10aef-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'call_xRK1M26mLAGIB0oKJvrivudr'}])]}
value: {'messages': [ToolMessage(content='[{"url": "https://github.com/langchain-ai/langgraph/blob/main/README.md", "content": "Define the nodes\\nWe now need to define a few different nodes in our graph.\\\\nIn langgraph, a node can be either a function or a runnable.\\\\nThere are two main nodes we need for this:\\nWe will also need to define some edges.\\\\nSome of these edges may be conditional.\\\\nThe reason they are conditional is that based on the output of a node, one of several paths may be taken.\\\\nThe path that is taken is not known until that node is run (the LLM decides).\\n LangChain.\\\\nIt extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.\\\\nIt is inspired by Pregel and Apache Beam.\\\\nThe current interface exposed is one inspired by NetworkX.\\nThe main use is for adding cycles to your LLM application.\\\\nCrucially, this is NOT a DAG framework.\\\\nIf you want to build a DAG, you should use just use LangChain Expression Language.\\n This is a special node representing the end of the graph.\\\\nThis means that anything passed to this node will be the final output of the graph.\\\\nIt can be used in two places:\\nWhen to Use\\nWhen should you use this versus LangChain Expression Language?\\n This method adds a node to the graph.\\\\nIt takes two arguments:\\n.add_edge\\nCreates an edge from one node to the next.\\\\nThis means that output of the first node will be passed to the next node.\\\\nIt takes two arguments.\\n Assuming you have done the above Quick Start, you can build off it like:\\nHere, we manually define the first tool call that we will make.\\\\nNotice that it does that same thing as agent would have done (adds the agent_outcome key).\\\\nThis is so that we can easily plug it in.\\n"}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple 
runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\\n It\'s important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\\n LangGraph: Multi-Agent Workflows\\nLinks\\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \\"\\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\\n As part of this launch, we\'re also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\\n"}, {"url": "https://python.langchain.com/docs/langgraph/", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain . It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam ."}]', name='tavily_search_results_json', id='b75137fb-86fa-477c-b4c7-79fec35a80be', tool_call_id='call_xRK1M26mLAGIB0oKJvrivudr')]}
last message: content='I found some information about LangGraph:\n\n1. LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam. [Source](https://python.langchain.com/docs/langgraph/)\n\n2. LangGraph allows the creation of LLM workflows containing cycles, which are essential for most agent runtimes. It is available in both Python and JS and is fully integrated into the LangChain ecosystem, providing advantages in terms of integrations and observability. [Source](https://blog.langchain.dev/langgraph-multi-agent-workflows/)\n\n3. In LangGraph, nodes can be functions or runnables, and edges can be conditional based on the output of a node. LangGraph is used for adding cycles to LLM applications and is not a DAG framework. [Source](https://github.com/langchain-ai/langgraph/blob/main/README.md)\n\nThese sources provide more detailed information about LangGraph and its capabilities.' response_metadata={'finish_reason': 'stop'} id='run-4426b55f-4c9f-4fac-b9a9-ce0f98b6a73a-0'
value: {'messages': [AIMessage(content='I found some information about LangGraph:\n\n1. LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam. [Source](https://python.langchain.com/docs/langgraph/)\n\n2. LangGraph allows the creation of LLM workflows containing cycles, which are essential for most agent runtimes. It is available in both Python and JS and is fully integrated into the LangChain ecosystem, providing advantages in terms of integrations and observability. [Source](https://blog.langchain.dev/langgraph-multi-agent-workflows/)\n\n3. In LangGraph, nodes can be functions or runnables, and edges can be conditional based on the output of a node. LangGraph is used for adding cycles to LLM applications and is not a DAG framework. [Source](https://github.com/langchain-ai/langgraph/blob/main/README.md)\n\nThese sources provide more detailed information about LangGraph and its capabilities.', response_metadata={'finish_reason': 'stop'}, id='run-4426b55f-4c9f-4fac-b9a9-ce0f98b6a73a-0')]}

If we instead create the graph with graph_builder.compile(checkpointer=memory, interrupt_before=["action"]), i.e. adding the interrupt_before=["action"] parameter, then the run is interrupted before the next action node executes, and the result is:

last message: content='' additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_FytVzAxqElqPgkAVeKFVfeJq', 'function': {'arguments': '{"query":"LangGraph"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]} response_metadata={'finish_reason': 'tool_calls'} id='run-c34ad1a5-2c57-40d2-b49c-e37514be4082-0' tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'call_FytVzAxqElqPgkAVeKFVfeJq'}]
value: {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_FytVzAxqElqPgkAVeKFVfeJq', 'function': {'arguments': '{"query":"LangGraph"}', 'name': 'tavily_search_results_json'}, 'type': 'function'}]}, response_metadata={'finish_reason': 'tool_calls'}, id='run-c34ad1a5-2c57-40d2-b49c-e37514be4082-0', tool_calls=[{'name': 'tavily_search_results_json', 'args': {'query': 'LangGraph'}, 'id': 'call_FytVzAxqElqPgkAVeKFVfeJq'}])]}

As you can see, the run did not complete. Let's inspect what the next step to execute is:

snapshot = graph.get_state(config)
snapshot.next

existing_message = snapshot.values["messages"][-1]
existing_message.tool_calls

The two outputs are:

('action',)

[{'name': 'tavily_search_results_json',
  'args': {'query': 'LangGraph'},
  'id': 'call_FytVzAxqElqPgkAVeKFVfeJq'}]

The output above contains the next step to execute and the specific tool-call information.

We can run the following code to continue execution until it completes:

events = graph.stream(None, config)
for event in events:
    for value in event.values():
        print(f"value: {value}")

The final output is:

value: {'messages': [ToolMessage(content='[{"url": "https://python.langchain.com/docs/langgraph/", "content": "LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain . It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. It is inspired by Pregel and Apache Beam ."}, {"url": "https://github.com/langchain-ai/langgraph/blob/main/README.md", "content": "Define the nodes\\nWe now need to define a few different nodes in our graph.\\\\nIn langgraph, a node can be either a function or a runnable.\\\\nThere are two main nodes we need for this:\\nWe will also need to define some edges.\\\\nSome of these edges may be conditional.\\\\nThe reason they are conditional is that based on the output of a node, one of several paths may be taken.\\\\nThe path that is taken is not known until that node is run (the LLM decides).\\n LangChain.\\\\nIt extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.\\\\nIt is inspired by Pregel and Apache Beam.\\\\nThe current interface exposed is one inspired by NetworkX.\\nThe main use is for adding cycles to your LLM application.\\\\nCrucially, this is NOT a DAG framework.\\\\nIf you want to build a DAG, you should use just use LangChain Expression Language.\\n This is a special node representing the end of the graph.\\\\nThis means that anything passed to this node will be the final output of the graph.\\\\nIt can be used in two places:\\nWhen to Use\\nWhen should you use this versus LangChain Expression Language?\\n This method adds a node to the graph.\\\\nIt takes two arguments:\\n.add_edge\\nCreates an edge from one node to the next.\\\\nThis means that output of the first node will be passed to the next node.\\\\nIt takes two arguments.\\n Assuming 
you have done the above Quick Start, you can build off it like:\\nHere, we manually define the first tool call that we will make.\\\\nNotice that it does that same thing as agent would have done (adds the agent_outcome key).\\\\nThis is so that we can easily plug it in.\\n"}, {"url": "https://blog.langchain.dev/langgraph-multi-agent-workflows/", "content": "As a part of the launch, we highlighted two simple runtimes: one that is the equivalent of the AgentExecutor in langchain, and a second that was a version of that aimed at message passing and chat models.\\n It\'s important to note that these three examples are only a few of the possible examples we could highlight - there are almost assuredly other examples out there and we look forward to seeing what the community comes up with!\\n LangGraph: Multi-Agent Workflows\\nLinks\\nLast week we highlighted LangGraph - a new package (available in both Python and JS) to better enable creation of LLM workflows containing cycles, which are a critical component of most agent runtimes. \\"\\nAnother key difference between Autogen and LangGraph is that LangGraph is fully integrated into the LangChain ecosystem, meaning you take fully advantage of all the LangChain integrations and LangSmith observability.\\n As part of this launch, we\'re also excited to highlight a few applications built on top of LangGraph that utilize the concept of multiple agents.\\n"}]', name='tavily_search_results_json', id='b1f6c679-eecd-44d8-8610-17f1dd5c14d4', tool_call_id='call_FytVzAxqElqPgkAVeKFVfeJq')]}
last message: content='LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. LangGraph is inspired by Pregel and Apache Beam.\n\nYou can find more information about LangGraph on the following links:\n1. [LangGraph Documentation](https://python.langchain.com/docs/langgraph/): Provides details on how LangGraph works and its features.\n2. [LangGraph GitHub Repository](https://github.com/langchain-ai/langgraph/blob/main/README.md): Explains how to define nodes and edges in LangGraph and provides insights into its usage.\n3. [LangGraph Multi-Agent Workflows Blog Post](https://blog.langchain.dev/langgraph-multi-agent-workflows/): Discusses the use of LangGraph for multi-agent workflows and highlights applications built on top of LangGraph.\n\nLangGraph is designed to enable the creation of LLM workflows containing cycles, which are essential for most agent runtimes. It offers a way to coordinate multiple actors in a cyclic manner within LangChain applications.' response_metadata={'finish_reason': 'stop'} id='run-342ae745-57ae-43ff-a274-c8942e55e95a-0'
value: {'messages': [AIMessage(content='LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner. LangGraph is inspired by Pregel and Apache Beam.\n\nYou can find more information about LangGraph on the following links:\n1. [LangGraph Documentation](https://python.langchain.com/docs/langgraph/): Provides details on how LangGraph works and its features.\n2. [LangGraph GitHub Repository](https://github.com/langchain-ai/langgraph/blob/main/README.md): Explains how to define nodes and edges in LangGraph and provides insights into its usage.\n3. [LangGraph Multi-Agent Workflows Blog Post](https://blog.langchain.dev/langgraph-multi-agent-workflows/): Discusses the use of LangGraph for multi-agent workflows and highlights applications built on top of LangGraph.\n\nLangGraph is designed to enable the creation of LLM workflows containing cycles, which are essential for most agent runtimes. It offers a way to coordinate multiple actors in a cyclic manner within LangChain applications.', response_metadata={'finish_reason': 'stop'}, id='run-342ae745-57ae-43ff-a274-c8942e55e95a-0')]}

As this example shows, we can interrupt an agent's run by adding the interrupt_before parameter when compiling the graph. While the agent is interrupted, we can insert other operations or checks to decide whether to terminate the run or let it continue.


That concludes this introduction to LangGraph and how to use it.

Reference: https://blog.langchain.dev/langgraph/