LangChain 62: LCEL Deep Dive 25 - Agents | LangChain Expression Language (LCEL)

LangChain article series

  1. LangChain 36: LCEL Deep Dive 1 - Advantages of the LangChain Expression Language (LCEL)
  2. LangChain 37: LCEL Deep Dive 2 - prompt + model + output parser
  3. LangChain 38: LCEL Deep Dive 3 - RAG, Retrieval-Augmented Generation
  4. LangChain 39: LCEL Deep Dive 4 - Why use LCEL
  5. LangChain 40: A workaround for the OpenAI ChatGPT API "Account deactivated" error in LangChain, via a relay API
  6. LangChain 41: LCEL Deep Dive 5 - Why call LLMs through LCEL
  7. LangChain 42: LCEL Deep Dive 6 - Calling different LLMs at runtime
  8. LangChain 43: LCEL Deep Dive 7 - Logging and Fallbacks
  9. LangChain 44: LCEL Deep Dive 8 - Runnable interface input/output schemas
  10. LangChain 45: LCEL Deep Dive 9 - Runnable invoke, stream, batch, and async calls
  11. LangChain 46: LCEL Deep Dive 10 - Debug logging of intermediate Runnable states
  12. LangChain 47: LCEL Deep Dive 11 - Runnable parallel processing
  13. LangChain 48: The definitive workaround for the OpenAI ChatGPT API "Account deactivated" error in LangChain, via a relay API
  14. LangChain 49: LCEL Deep Dive 12 - Runnable passthrough, keeping input unchanged
  15. LangChain 50: LCEL Deep Dive 13 - Custom pipeline functions
  16. LangChain 51: LCEL Deep Dive 14 - Auto-fixing configuration with RunnableConfig
  17. LangChain 52: LCEL Deep Dive 15 - Bind runtime args
  18. LangChain 53: LCEL Deep Dive 16 - Dynamic routing
  19. LangChain 54: LCEL Deep Dive 17 - Dynamic routing with Chains
  20. LangChain 55: LCEL Deep Dive 18 - Custom dynamic routing with functions
  21. LangChain 56: LCEL Deep Dive 19 - Selecting the LLM at runtime via config
  22. LangChain 57: LCEL Deep Dive 20 - LLM Fallbacks for rate limits and backup models
  23. LangChain 58: LCEL Deep Dive 21 - Memory and message history
  24. LangChain 59: LCEL Deep Dive 22 - Multiple chains interacting
  25. LangChain 60: LCEL Deep Dive 23 - Passing parameters through multiple chains
  26. LangChain 61: LCEL Deep Dive 24 - Passing parameters through multiple chains


1. Agents

You can pass a Runnable in to act as an agent.

Building an agent from a Runnable usually involves a few pieces:

  1. Data processing for the intermediate steps. These need to be represented in a way the language model can recognize, and this should be tightly coupled to the instructions in the prompt.
  2. The prompt itself.
  3. The model, including stop tokens if needed.
  4. The output parser, which should be in sync with the format the prompt specifies.
from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_community.chat_models import ChatOpenAI

from dotenv import load_dotenv  # reads environment variables from a .env file
load_dotenv()  # actually load them (e.g. OPENAI_API_KEY)

from langchain.globals import set_debug  # toggles LangChain's global debug mode
set_debug(True)  # enable debug logging for LangChain

model = ChatOpenAI()
@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"

tool_list = [search]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")
# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(
            x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)
response = agent_executor.invoke({"input": "whats the weather in New york?"})

print('response >> ', response)
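To see concretely what `convert_intermediate_steps` feeds back into the prompt as `agent_scratchpad`, here is a minimal standalone sketch. `FakeAction` is a stand-in for LangChain's `AgentAction` (just the two attributes the function reads), so the snippet runs with no dependencies:

```python
from dataclasses import dataclass

# Stand-in for LangChain's AgentAction, so this sketch runs with no dependencies.
@dataclass
class FakeAction:
    tool: str
    tool_input: str

def convert_intermediate_steps(intermediate_steps):
    # Same logic as in the agent above: serialize each (action, observation)
    # pair into the XML layout the xml-agent prompt expects.
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log

steps = [(FakeAction("search", "weather in New York"), "32 degrees")]
print(convert_intermediate_steps(steps))
# -> <tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>
```

This string is exactly what the model sees as its "previous work" on the next turn, which is why the serialization must match the tag vocabulary the prompt teaches the model.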

Running agent_executor prints:

> Entering new AgentExecutor chain...
 <tool>search</tool><tool_input>weather in New York32 degrees <tool>search</tool>
<tool_input>weather in New York32 degrees <final_answer>The weather in New York is 32 degrees

> Finished chain.

Expected output:

{'input': 'whats the weather in New york?',
 'output': 'The weather in New York is 32 degrees'}

In my actual run, however, the output parsing still errored:

(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop)[1] % python LCEL/agents.py                           ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
  "input": "whats the weather in New york?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "whats the weather in New york?",
  "intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad>] Entering Chain run with input:
{
  "input": "whats the weather in New york?",
  "intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 4:chain:<lambda>] Entering Chain run with input:
{
  "input": "whats the weather in New york?",
  "intermediate_steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 5:chain:<lambda>] Entering Chain run with input:
{
  "input": "whats the weather in New york?",
  "intermediate_steps": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 4:chain:<lambda>] [9ms] Exiting Chain run with output:
{
  "output": "whats the weather in New york?"
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad> > 5:chain:<lambda>] [17ms] Exiting Chain run with output:
{
  "output": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 3:chain:RunnableParallel<input,agent_scratchpad>] [58ms] Exiting Chain run with output:
{
  "input": "whats the weather in New york?",
  "agent_scratchpad": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "input": "whats the weather in New york?",
  "agent_scratchpad": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 6:prompt:ChatPromptTemplate] [3ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use  and  tags. You will then get back a response in the form \nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\nsearchweather in SF\n64 degrees\n\nWhen you are done, respond with a final answer between . For example:\n\nThe weather in SF is 64 degrees\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\n",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use  and  tags. You will then get back a response in the form \nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\nsearchweather in SF\n64 degrees\n\nWhen you are done, respond with a final answer between . For example:\n\nThe weather in SF is 64 degrees\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 7:llm:ChatOpenAI] [2.17s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "searchweather in New York",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "searchweather in New York",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 14,
      "prompt_tokens": 191,
      "total_tokens": 205
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:XMLAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence > 8:parser:XMLAgentOutputParser] [1ms] Exiting Parser run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "schema",
    "agent",
    "AgentAction"
  ],
  "kwargs": {
    "tool": "search",
    "tool_input": "weather in New York",
    "log": "searchweather in New York"
  }
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableSequence] [2.25s] Exiting Chain run with output:
[outputs]
[tool/start] [1:chain:AgentExecutor > 9:tool:search] Entering Tool run with input:
"weather in New York"
[tool/end] [1:chain:AgentExecutor > 9:tool:search] [0ms] Exiting Tool run with output:
"32 degrees"
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad>] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 12:chain:<lambda>] Entering Chain run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 12:chain:<lambda>] [3ms] Exiting Chain run with output:
{
  "output": "whats the weather in New york?"
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 13:chain:<lambda>] Entering Chain run with input:
[inputs]
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad> > 13:chain:<lambda>] [8ms] Exiting Chain run with output:
{
  "output": "searchweather in New York32 degrees"
}
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 11:chain:RunnableParallel<input,agent_scratchpad>] [18ms] Exiting Chain run with output:
{
  "input": "whats the weather in New york?",
  "agent_scratchpad": "searchweather in New York32 degrees"
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 14:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "input": "whats the weather in New york?",
  "agent_scratchpad": "searchweather in New York32 degrees"
}
[chain/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 14:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use  and  tags. You will then get back a response in the form \nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\nsearchweather in SF\n64 degrees\n\nWhen you are done, respond with a final answer between . For example:\n\nThe weather in SF is 64 degrees\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\nsearchweather in New York32 degrees",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 15:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are a helpful assistant. Help the user answer any questions.\n\nYou have access to the following tools:\n\nsearch: search(query: str) -> str - Search things about current events.\n\nIn order to use a tool, you can use  and  tags. You will then get back a response in the form \nFor example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:\n\nsearchweather in SF\n64 degrees\n\nWhen you are done, respond with a final answer between . For example:\n\nThe weather in SF is 64 degrees\n\nBegin!\n\nPrevious Conversation:\n\n\nQuestion: whats the weather in New york?\nsearchweather in New York32 degrees"
  ]
}
[llm/end] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 15:llm:ChatOpenAI] [1.07s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "The weather in New York is 32 degrees.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The weather in New York is 32 degrees.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 10,
      "prompt_tokens": 216,
      "total_tokens": 226
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 16:parser:XMLAgentOutputParser] Entering Parser run with input:
[inputs]
[chain/error] [1:chain:AgentExecutor > 10:chain:RunnableSequence > 16:parser:XMLAgentOutputParser] [8ms] Parser run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n    context.run(\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n    return func(input, **kwargs)  # type: ignore[call-arg]\n           ^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n    lambda inner_input: self.parse_result(\n                        ^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n    return self.parse(result[0].text)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n    raise ValueError\n\n\nValueError"
[chain/error] [1:chain:AgentExecutor > 10:chain:RunnableSequence] [1.10s] Chain run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 1762, in invoke\n    input = step.invoke(\n            ^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 167, in invoke\n    return self._call_with_config(\n           ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n    context.run(\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n    return func(input, **kwargs)  # type: ignore[call-arg]\n           ^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n    lambda inner_input: self.parse_result(\n                        ^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n    return self.parse(result[0].text)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n    raise ValueError\n\n\nValueError"
[chain/error] [1:chain:AgentExecutor] [3.39s] Chain run errored with error:
"ValueError()Traceback (most recent call last):\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/chains/base.py\", line 310, in __call__\n    self._call(inputs, run_manager=run_manager)\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1312, in _call\n    next_step_output = self._take_next_step(\n                       ^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1038, in _take_next_step\n    [\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1038, in \n    [\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 1066, in _iter_next_step\n    output = self.agent.plan(\n             ^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py\", line 385, in plan\n    output = self.runnable.invoke(inputs, config={\"callbacks\": callbacks})\n             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 1762, in invoke\n    input = step.invoke(\n            ^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 167, in invoke\n    return self._call_with_config(\n           ^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py\", line 975, in _call_with_config\n    context.run(\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py\", line 323, in call_func_with_variable_args\n    return func(input, **kwargs)  # type: ignore[call-arg]\n           ^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 168, in \n    lambda inner_input: self.parse_result(\n                        ^^^^^^^^^^^^^^^^^^\n\n\n  File 
\"/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py\", line 219, in parse_result\n    return self.parse(result[0].text)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n  File \"/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py\", line 45, in parse\n    raise ValueError\n\n\nValueError"
Traceback (most recent call last):
  File "/Users/zgpeace/Workspace/LLM/langchain-llm-app/LCEL/agents.py", line 59, in <module>
    response = agent_executor.invoke({"input": "whats the weather in New york?"})
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 93, in invoke
    return self(
           ^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 316, in __call__
    raise e
  File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1312, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
    [
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
    [
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 385, in plan
    output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1762, in invoke
    input = step.invoke(
            ^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 167, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 975, in _call_with_config
    context.run(
  File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 323, in call_func_with_variable_args
    return func(input, **kwargs)  # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 168, in <lambda>
    lambda inner_input: self.parse_result(
                        ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/output_parsers/base.py", line 219, in parse_result
    return self.parse(result[0].text)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py", line 45, in parse
    raise ValueError
ValueError
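The ValueError is raised inside `XMLAgentOutputParser.parse` when the model's reply contains neither a `</tool>` nor a `<final_answer>` tag; judging from the log, the final LLM reply here appears to have been plain text. One possible workaround is a more forgiving parse step. The `lenient_parse` helper below is a hypothetical sketch, not part of LangChain, that falls back to treating untagged text as the final answer instead of raising:

```python
import re

def lenient_parse(text: str) -> dict:
    """Parse an XML-agent reply without raising.

    Returns {"tool": ..., "tool_input": ...} for a tool call, otherwise
    {"final_answer": ...}. Untagged text becomes the final answer instead
    of triggering a ValueError the way XMLAgentOutputParser does.
    """
    # Closing tags may be absent because they are used as stop tokens,
    # so every closing tag is optional in these patterns.
    m = re.search(r"<final_answer>(.*?)(?:</final_answer>)?\s*$", text, re.DOTALL)
    if m:
        return {"final_answer": m.group(1).strip()}
    tool = re.search(r"<tool>(.*?)</tool>", text, re.DOTALL)
    if tool:
        ti = re.search(r"<tool_input>(.*?)(?:</tool_input>|$)", text, re.DOTALL)
        return {"tool": tool.group(1).strip(),
                "tool_input": ti.group(1).strip() if ti else ""}
    # Fallback: no recognizable tags, treat the raw text as the answer.
    return {"final_answer": text.strip()}

print(lenient_parse("The weather in New York is 32 degrees."))
# -> {'final_answer': 'The weather in New York is 32 degrees.'}
```

Alternatively, `AgentExecutor` accepts a `handle_parsing_errors` argument that catches parser exceptions and feeds them back to the model, which may be the simpler fix in practice.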

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/cookbook/agent
