LangChain 60: Deep Dive into LangChain Expression Language (LCEL), Part 23 — Passing Parameters Through Multiple Chains

LangChain series articles

  1. LangChain 36: Deep Dive into LCEL, Part 1 — Advantages of LCEL
  2. LangChain 37: Deep Dive into LCEL, Part 2 — prompt + model + output parser
  3. LangChain 38: Deep Dive into LCEL, Part 3 — RAG (retrieval-augmented generation)
  4. LangChain 39: Deep Dive into LCEL, Part 4 — Why use LCEL
  5. LangChain 40: A workaround for "Account deactivated" when calling the OpenAI ChatGPT API from LangChain, via a proxy API
  6. LangChain 41: Deep Dive into LCEL, Part 5 — Why call LLMs through LCEL
  7. LangChain 42: Deep Dive into LCEL, Part 6 — Calling different LLMs at runtime
  8. LangChain 43: Deep Dive into LCEL, Part 7 — Logging and Fallbacks
  9. LangChain 44: Deep Dive into LCEL, Part 8 — Runnable input/output schemas
  10. LangChain 45: Deep Dive into LCEL, Part 9 — Runnable invoke, stream, batch, and async calls
  11. LangChain 46: Deep Dive into LCEL, Part 10 — Debug logs of intermediate Runnable state
  12. LangChain 47: Deep Dive into LCEL, Part 11 — Runnable parallel execution
  13. LangChain 48: Final fix for "Account deactivated" when calling the OpenAI ChatGPT API from LangChain, via a proxy API
  14. LangChain 49: Deep Dive into LCEL, Part 12 — Runnable pass-through, keeping the input unchanged
  15. LangChain 50: Deep Dive into LCEL, Part 13 — Custom pipeline functions
  16. LangChain 51: Deep Dive into LCEL, Part 14 — Auto-retry configuration with RunnableConfig
  17. LangChain 52: Deep Dive into LCEL, Part 15 — Bind runtime args
  18. LangChain 53: Deep Dive into LCEL, Part 16 — Dynamic routing
  19. LangChain 54: Deep Dive into LCEL, Part 17 — Chain-based routing
  20. LangChain 55: Deep Dive into LCEL, Part 18 — Custom function routing
  21. LangChain 56: Deep Dive into LCEL, Part 19 — Selecting the LLM at runtime via config
  22. LangChain 57: Deep Dive into LCEL, Part 20 — LLM Fallbacks as backup models under rate limits
  23. LangChain 58: Deep Dive into LCEL, Part 21 — Memory and message history
  24. LangChain 59: Deep Dive into LCEL, Part 22 — Multiple chains interacting


Multiple chains: passing parameters between them

Runnables can easily be composed to string multiple chains together.
In the example below, four chains interact: the fourth chain takes the outputs of the second and third chains.

from langchain.prompts import ChatPromptTemplate
from langchain_community.chat_models import ChatOpenAI
# A string output parser so both model outputs are the same type (plain str)
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

from dotenv import load_dotenv  # loads environment variables from a .env file
load_dotenv()  # actually load them (e.g. OPENAI_API_KEY)

from langchain.globals import set_debug  # toggles LangChain's debug mode
set_debug(True)  # enable verbose debug logging

prompt1 = ChatPromptTemplate.from_template(
    "generate a {attribute} color. Return the name of the color and nothing else:"
)
prompt2 = ChatPromptTemplate.from_template(
    "what is a fruit of color: {color}. Return the name of the fruit and nothing else:"
)
prompt3 = ChatPromptTemplate.from_template(
    "what is a country with a flag that has the color: {color}. Return the name of the country and nothing else:"
)
prompt4 = ChatPromptTemplate.from_template(
    "What is the color of {fruit} and the flag of {country}?"
)
model = ChatOpenAI()
model_parser = model | StrOutputParser()

color_generator = (
    {"attribute": RunnablePassthrough()} | prompt1 | {"color": model_parser}
)
color_to_fruit = prompt2 | model_parser
color_to_country = prompt3 | model_parser
question_generator = (
    color_generator | {"fruit": color_to_fruit, "country": color_to_country} | prompt4
)
prompt = question_generator.invoke("warm")
print('prompt >> ', prompt)
response = model.invoke(prompt)
print('response >> ', response)
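The `{"attribute": RunnablePassthrough()}` step at the head of `color_generator` is the "pass-through" the title refers to: it forwards the raw string input unchanged, wrapped in the dict shape that `prompt1` expects. A minimal plain-Python sketch of that mapping step (illustrative only, not LCEL internals):

```python
def passthrough_to_dict(value):
    # Rough equivalent of {"attribute": RunnablePassthrough()}:
    # forward the raw input unchanged under the "attribute" key,
    # so the downstream prompt template can fill in {attribute}.
    return {"attribute": value}

print(passthrough_to_dict("warm"))  # {'attribute': 'warm'}
```

This is why `question_generator.invoke("warm")` can be called with a bare string even though every prompt template downstream expects a dict.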

Output

(.venv) zgpeace@zgpeaces-MacBook-Pro git:(develop)[2] % python LCEL/chains_mul2.py                  ~/Workspace/LLM/langchain-llm-app
[chain/start] [1:chain:RunnableSequence] Entering Chain run with input:
{
  "input": "warm"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<attribute>] Entering Chain run with input:
{
  "input": "warm"
}
[chain/start] [1:chain:RunnableSequence > 2:chain:RunnableParallel<attribute> > 3:chain:RunnablePassthrough] Entering Chain run with input:
{
  "input": "warm"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<attribute> > 3:chain:RunnablePassthrough] [3ms] Exiting Chain run with output:
{
  "output": "warm"
}
[chain/end] [1:chain:RunnableSequence > 2:chain:RunnableParallel<attribute>] [12ms] Exiting Chain run with output:
{
  "attribute": "warm"
}
[chain/start] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "attribute": "warm"
}
[chain/end] [1:chain:RunnableSequence > 4:prompt:ChatPromptTemplate] [2ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "generate a warm color. Return the name of the color and nothing else:",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[chain/start] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color>] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence] Entering Chain run with input:
[inputs]
[llm/start] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence > 7:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: generate a warm color. Return the name of the color and nothing else:"
  ]
}
[llm/end] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence > 7:llm:ChatOpenAI] [4.06s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Red",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Red",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 1,
      "prompt_tokens": 22,
      "total_tokens": 23
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence > 8:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence > 8:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "Red"
}
[chain/end] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color> > 6:chain:RunnableSequence] [4.07s] Exiting Chain run with output:
{
  "output": "Red"
}
[chain/end] [1:chain:RunnableSequence > 5:chain:RunnableParallel<color>] [4.07s] Exiting Chain run with output:
{
  "color": "Red"
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country>] Entering Chain run with input:
{
  "color": "Red"
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence] Entering Chain run with input:
{
  "color": "Red"
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 11:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "color": "Red"
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence] Entering Chain run with input:
{
  "color": "Red"
}
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 11:prompt:ChatPromptTemplate] [8ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "what is a fruit of color: Red. Return the name of the fruit and nothing else:",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 12:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "color": "Red"
}
[llm/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 12:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: what is a fruit of color: Red. Return the name of the fruit and nothing else:"
  ]
}
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 12:prompt:ChatPromptTemplate] [7ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "what is a country with a flag that has the color: Red. Return the name of the country and nothing else:",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[llm/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 13:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: what is a country with a flag that has the color: Red. Return the name of the country and nothing else:"
  ]
}
[llm/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 12:llm:ChatOpenAI] [772ms] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "Strawberry.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "Strawberry.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 4,
      "prompt_tokens": 26,
      "total_tokens": 30
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 13:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence > 13:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "Strawberry."
}
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 10:chain:RunnableSequence] [789ms] Exiting Chain run with output:
{
  "output": "Strawberry."
}
[llm/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 13:llm:ChatOpenAI] [2.42s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "China",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "China",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 1,
      "prompt_tokens": 31,
      "total_tokens": 32
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
[chain/start] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 14:parser:StrOutputParser] Entering Parser run with input:
[inputs]
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence > 14:parser:StrOutputParser] [1ms] Exiting Parser run with output:
{
  "output": "China"
}
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country> > 11:chain:RunnableSequence] [2.45s] Exiting Chain run with output:
{
  "output": "China"
}
[chain/end] [1:chain:RunnableSequence > 9:chain:RunnableParallel<fruit,country>] [2.46s] Exiting Chain run with output:
{
  "fruit": "Strawberry.",
  "country": "China"
}
[chain/start] [1:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] Entering Prompt run with input:
{
  "fruit": "Strawberry.",
  "country": "China"
}
[chain/end] [1:chain:RunnableSequence > 15:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:
{
  "lc": 1,
  "type": "constructor",
  "id": [
    "langchain",
    "prompts",
    "chat",
    "ChatPromptValue"
  ],
  "kwargs": {
    "messages": [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "messages",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "What is the color of Strawberry. and the flag of China?",
          "additional_kwargs": {}
        }
      }
    ]
  }
}
[chain/end] [1:chain:RunnableSequence] [6.56s] Exiting Chain run with output:
[outputs]
prompt >>  messages=[HumanMessage(content='What is the color of Strawberry. and the flag of China?')]
[llm/start] [1:llm:ChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: What is the color of Strawberry. and the flag of China?"
  ]
}
[llm/end] [1:llm:ChatOpenAI] [1.42s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "The color of a strawberry is typically red. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.",
        "generation_info": {
          "finish_reason": "stop",
          "logprobs": null
        },
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The color of a strawberry is typically red. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 34,
      "prompt_tokens": 20,
      "total_tokens": 54
    },
    "model_name": "gpt-3.5-turbo",
    "system_fingerprint": null
  },
  "run": null
}
response >>  content='The color of a strawberry is typically red. The flag of China is predominantly red with a large yellow star in the upper left corner and four smaller yellow stars surrounding it.'
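The debug log above traces one fan-out/merge pass: "warm" → color "Red" → fruit "Strawberry." and country "China" in parallel → the final prompt. The same data flow can be sketched with plain Python functions standing in for the runnables (the stub return values are hard-coded to match this run's log; no API calls are made):

```python
def color_generator(attribute):
    # Stand-in for {"attribute": RunnablePassthrough()} | prompt1 | {"color": model_parser}
    return {"color": "Red"}  # stubbed model answer, matching the log above

def color_to_fruit(inputs):
    # Stand-in for prompt2 | model_parser
    return "Strawberry"

def color_to_country(inputs):
    # Stand-in for prompt3 | model_parser
    return "China"

def question_generator(attribute):
    colored = color_generator(attribute)
    # RunnableParallel<fruit,country>: both branches receive the same {"color": ...} dict
    merged = {"fruit": color_to_fruit(colored), "country": color_to_country(colored)}
    # prompt4 fills {fruit} and {country} from the merged dict
    return "What is the color of {fruit} and the flag of {country}?".format(**merged)

print(question_generator("warm"))
# What is the color of Strawberry and the flag of China?
```

The key point the log makes concrete: the dict literal `{"fruit": ..., "country": ...}` in the LCEL pipeline is coerced into a `RunnableParallel`, so both branches run against the same upstream output and their results are merged into one dict for `prompt4`.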

Code

https://github.com/zgpeace/pets-name-langchain/tree/develop

References

https://python.langchain.com/docs/expression_language/cookbook/multiple_chains
