Learning LangGraph -- Creating a Basic Agent Executor

This article shows how to build a basic agent executor with LangGraph. The main steps are:
1. Define the tools
2. Create the LangChain agent (composed of an LLM, tools, and a prompt)
3. Define the graph state
The state of a traditional LangChain agent has several attributes:
(1) `input`: the main request string from the user, passed in as input.
(2) `chat_history`: the previous messages in the conversation, also passed in as input.
(3) `intermediate_steps`: a list of the actions the agent has taken so far, together with the corresponding observations. It is updated on every iteration of the agent.
(4) `agent_outcome`: the agent's response, which can be either an AgentAction or an AgentFinish. When it is an AgentFinish, the AgentExecutor should finish; otherwise it should call the requested tool.
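A note on `intermediate_steps`: in the graph state defined in the code below, it is annotated with `operator.add`, which tells LangGraph to append each node's update to the existing list instead of overwriting it. A minimal plain-Python sketch of that merge semantics (the step strings here are made-up placeholders):

```python
import operator

# How a state key annotated with `operator.add` is merged:
# the value a node returns is ADDED to the existing value.
existing_steps = [("search: weather in sf", "observation: 65 and sunny")]
node_update = [("search: weather tomorrow", "observation: 70 and clear")]

merged = operator.add(existing_steps, node_update)  # same as existing + update
print(merged)
```

Without the annotation, a node returning `{"intermediate_steps": [...]}` would replace the whole list; with it, the history accumulates across iterations.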
4. Define the nodes
We now need to define a few different nodes in the graph. In LangGraph, a node can be either a function or a runnable.
We need two main nodes:
(1) The agent: responsible for deciding what (if any) action to take.
(2) A function to invoke tools: if the agent decides to take an action, this node executes it.
5. Define the edges
Some of these edges may be conditional. They are conditional because, based on a node's output, one of several paths may be taken, and which path is taken is not known until the node runs (the LLM decides).
(1) Conditional edge: after the agent is called, either:
a. If the agent said to take an action, call the function that invokes tools
b. If the agent said it is finished, then finish
(2) Normal edge: after a tool is invoked, always return to the agent so it can decide what to do next
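Put together, the edge wiring above amounts to a loop: call the agent, branch on its outcome, run the tool, and go back to the agent. The following is an illustrative plain-Python sketch of that control flow only; `agent_step`, `run_tool`, and the toy implementations are hypothetical stand-ins, not LangGraph APIs:

```python
# Sketch of the graph's control flow. `agent_step` and `run_tool` are
# hypothetical stand-ins for the real agent node and tool node.
def run_graph(state, agent_step, run_tool, max_iters=10):
    for _ in range(max_iters):
        outcome = agent_step(state)           # "agent" node
        state["agent_outcome"] = outcome
        if outcome["finished"]:               # conditional edge -> END
            return state
        observation = run_tool(outcome)       # "action" node
        state["intermediate_steps"].append((outcome, observation))
        # normal edge: loop back to the agent
    return state

# Toy agent: finishes once it has seen at least one observation.
def toy_agent(state):
    done = len(state["intermediate_steps"]) > 0
    return {"finished": done, "tool_input": "weather in sf"}

def toy_tool(action):
    return "65 and sunny"

final = run_graph({"intermediate_steps": []}, toy_agent, toy_tool)
print(final["agent_outcome"]["finished"])  # True after one tool call
```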
6. Compile the graph

The code implementation is as follows:

from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain_openai.chat_models import ChatOpenAI
import os
os.environ["OPENAI_API_KEY"]="sk-XXXXXXXXXX"
os.environ["SERPAPI_API_KEY"] = 'XXXXXXXXXXXXXXXXXXXXX'
from langchain.agents.tools import Tool
from langchain_community.utilities import SerpAPIWrapper
search = SerpAPIWrapper()

search_tool = Tool(
    name="Search",
    func=search.run,
    description="useful for when you need to answer questions about current events",
)

tools = [search_tool]

#### Create the LangChain agent
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo-1106", streaming=True)
# Construct the OpenAI Functions agent
agent_runnable = create_openai_functions_agent(llm, tools, prompt)

"""
from langchain.agents import AgentExecutor
agent_executor = AgentExecutor(agent=agent_runnable, tools=tools)
response = agent_executor.invoke({"input": "weather in San Francisco"})
"""

#### Define the graph state
from typing import TypedDict, Annotated, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator
class AgentState(TypedDict):
    # The input string
    input: str
    # The list of previous messages in the conversation
    chat_history: list[BaseMessage]
    # The outcome of a given call to the agent
    # Needs `None` as a valid type, since this is what it will start as
    agent_outcome: Union[AgentAction, AgentFinish, None]
    # List of actions and corresponding observations
    # Here we annotate this with `operator.add` to indicate that updates to
    # this state key should be ADDED to the existing value (not overwrite it)
    intermediate_steps: Annotated[list[tuple[AgentAction, str]], operator.add]

####  Define the nodes
from langgraph.prebuilt.tool_executor import ToolExecutor

# It takes in an agent action and calls that tool and returns the result
tool_executor = ToolExecutor(tools)

# Define the agent
def run_agent(data):
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}

# Define the function to execute tools
def execute_tools(data):
    # Get the most recent agent_outcome - this is the key added in the `agent` above
    agent_action = data['agent_outcome']
    output = tool_executor.invoke(agent_action)
    return {"intermediate_steps": [(agent_action, str(output))]}

# Define logic that will be used to determine which conditional edge to go down
def should_continue(data):
    if isinstance(data['agent_outcome'], AgentFinish):
        return "end"
    else:
        return "continue"
    
#### Define the graph

from langgraph.graph import END, StateGraph

workflow = StateGraph(AgentState)

workflow.add_node("agent", run_agent)
workflow.add_node("action", execute_tools)
workflow.set_entry_point("agent")

workflow.add_conditional_edges(
    "agent",
    should_continue,
    {
        # If `tools`, then we call the tool node.
        "continue": "action",
        # Otherwise we finish.
        "end": END
    }
)

workflow.add_edge("action", "agent")

# This compiles it into a LangChain Runnable, meaning you can use it as you would any other runnable
app = workflow.compile()

inputs = {"input": "what is the weather in sf", "chat_history": []}
for s in app.stream(inputs):
    print(list(s.values())[0])
    print("----")
