LangChain is a modular framework for building applications on top of large language models (LLMs). Its architecture follows the principles of componentization and composability, allowing developers to assemble complex workflows from reusable parts. The following is a walkthrough of LangChain's core architecture:
Function: the interaction interface with LLMs (Model I/O)
Components:
Models: support for multiple model providers
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
# OpenAI model
openai_llm = ChatOpenAI(model="gpt-4-turbo")
# Anthropic model
claude_llm = ChatAnthropic(model="claude-3-opus")
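Both wrappers implement the same Runnable interface, so they are called identically; a minimal usage sketch (the prompt text is illustrative):
# .invoke() returns an AIMessage whose text is in .content
response = openai_llm.invoke("Summarize the LangChain architecture in one sentence.")
print(response.content)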
Prompts: prompt engineering and template management
from langchain_core.prompts import (
ChatPromptTemplate,
FewShotPromptTemplate
)
# Multi-role chat prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a {role}"),
    ("human", "{input}")
])
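FewShotPromptTemplate, imported above but not shown, injects worked examples into a prompt; a brief sketch in which the example records are made up purely for illustration:
from langchain_core.prompts import PromptTemplate
# Hypothetical examples for illustration only
examples = [
    {"word": "fast", "antonym": "slow"},
    {"word": "hot", "antonym": "cold"},
]
example_prompt = PromptTemplate.from_template("Word: {word}\nAntonym: {antonym}")
few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"]
)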
Output Parsers: structured output handling
from langchain_core.output_parsers import (
PydanticOutputParser,
XMLOutputParser
)
from pydantic import BaseModel
# Target schema for the structured output
class Answer(BaseModel):
    answer: str
    confidence: float
parser = PydanticOutputParser(pydantic_object=Answer)
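The parser is typically used at both ends of a call: its format instructions are injected into the prompt, and its parsing step turns the raw completion back into the Pydantic object. A short sketch reusing the model defined earlier (the question is illustrative):
# Inject the schema instructions into the prompt, then parse the model's reply
qa_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question.\n{format_instructions}"),
    ("human", "{question}")
]).partial(format_instructions=parser.get_format_instructions())
structured_chain = qa_prompt | openai_llm | parser
result = structured_chain.invoke({"question": "What is LangChain?"})  # -> Answer(...)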
Function: data loading, processing, and retrieval
Processing pipeline: load documents → split into chunks → embed → store in a vector database → retrieve
Key components:
Document Loaders: 80+ document loaders
from langchain_community.document_loaders import (
PyPDFLoader,
SeleniumURLLoader
)
# Load a PDF and split it into page-level documents
loader = PyPDFLoader("report.pdf")
pages = loader.load_and_split()
Text Splitters: intelligent text splitting
from langchain_text_splitters import TokenTextSplitter
from langchain_experimental.text_splitter import SemanticChunker  # SemanticChunker ships in langchain_experimental
from langchain_openai import OpenAIEmbeddings
# Semantic chunking: split where embedding similarity drops
splitter = SemanticChunker(OpenAIEmbeddings())
chunks = splitter.create_documents([text])
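TokenTextSplitter, imported above, is the more conventional choice when fixed, token-bounded chunk sizes are needed; a short sketch (the chunk sizes are illustrative):
# Token-based splitting with overlap to preserve context across chunk boundaries
token_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=50)
token_chunks = token_splitter.split_documents(pages)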
Vector Stores: vector storage options
from langchain_community.vectorstores import (
FAISS,
Pinecone,
Weaviate
)
# Local FAISS index (docs and embeddings come from the loading, splitting, and embedding steps above)
vectorstore = FAISS.from_documents(docs, embeddings)
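Chains normally consume the vector store through a retriever, which is exactly what the RAG chain below expects; a minimal sketch:
# Wrap the store as a retriever returning the top-4 most similar chunks
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
relevant_docs = retriever.invoke("What does the report conclude?")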
from langchain_core.runnables import (
RunnableParallel,
RunnablePassthrough
)
# Composite RAG chain: retrieve context, fill the prompt, call the LLM, parse the output
chain = (
RunnableParallel({
"context": retriever,
"question": RunnablePassthrough()
})
| prompt
| llm
| parser
)
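The assembled chain is itself a Runnable and is called directly; a usage sketch with an illustrative question:
# RunnableParallel feeds both the retrieved context and the original question into the prompt
answer = chain.invoke("What are the key findings of the report?")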
from langchain.agents import (
Tool,
AgentExecutor,
create_react_agent
)
# Custom tool definition (`search_tool` is a placeholder for any callable that takes and returns a string)
tools = [
    Tool(
        name="Search",
        func=search_tool,
        description="Useful for web search"
    )
]
# Create a ReAct agent; `prompt` must be a ReAct-style prompt containing
# the {tools}, {tool_names} and {agent_scratchpad} placeholders
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
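A ready-made ReAct prompt can be pulled from the LangChain Hub (requires the langchainhub package); a usage sketch with an illustrative question:
from langchain import hub
react_prompt = hub.pull("hwchase17/react")  # community-maintained ReAct prompt
agent = create_react_agent(llm, tools, react_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": "Who proposed the theory of relativity?"})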
State management options:
from langchain.memory import (
    ConversationBufferMemory,
    ConversationKGMemory
)
# Knowledge-graph memory: extracts entities and relations from the conversation
memory = ConversationKGMemory(llm=llm)
memory.save_context(
    {"input": "Einstein's contributions"},
    {"output": "He proposed the theory of relativity"}
)
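Saved context is read back with load_memory_variables, which is what a chain calls before building its prompt; a brief sketch:
# Returns the stored facts relevant to the new input
facts = memory.load_memory_variables({"input": "What did Einstein do?"})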
Monitoring and extensibility:
from langchain_core.callbacks import StdOutCallbackHandler
from langchain.callbacks import FileCallbackHandler
# Combine several callback handlers on a single run
handlers = [FileCallbackHandler("logs.json"), StdOutCallbackHandler()]
chain.invoke({"question": "..."}, config={"callbacks": handlers})
from langchain_core.callbacks import AsyncCallbackManager
# Async callback dispatch. LangChain has no built-in "distributed" callback manager;
# cross-process collection is usually done with a custom handler that forwards
# events to an external store (e.g. Redis) or to LangSmith tracing.
manager = AsyncCallbackManager(handlers=handlers)
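A minimal custom handler, as a sketch of that extension point (the Redis client and key name are hypothetical):
from langchain_core.callbacks import BaseCallbackHandler

class RedisEventHandler(BaseCallbackHandler):
    """Hypothetical handler that pushes run events onto a Redis list."""
    def __init__(self, redis_client, key="langchain:events"):
        self.redis = redis_client
        self.key = key

    def on_llm_end(self, response, **kwargs):
        # Push the generated text so another process can consume it
        self.redis.rpush(self.key, response.generations[0][0].text)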
from langchain_community.tools import (
    WikipediaQueryRun,
    ArxivQueryRun,
    YouTubeSearchTool
)
from langchain_community.utilities import WikipediaAPIWrapper
# Multi-tool integration (`calculator` is an illustrative placeholder)
tools = [
    WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper()),  # the Wikipedia tool requires an API wrapper instance
    ArxivQueryRun(),
    YouTubeSearchTool(),
    Tool.from_function(
        lambda x: str(calculator.eval(x)),
        name="Calculator",
        description="Math calculations"
    )
]
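Simple custom tools can also be declared with the @tool decorator, which derives the tool name and argument schema from the function signature and uses the docstring as the description; a brief sketch:
from langchain_core.tools import tool

@tool
def word_count(text: str) -> int:
    """Count the number of words in the given text."""
    return len(text.split())

tools.append(word_count)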
Composable programming paradigm (LCEL):
from langchain_core.runnables import (
RunnableLambda,
RunnableMap
)
# LCEL chain example (`process_text` stands for any user-defined function or Runnable)
chain = (
RunnableMap({
"text": lambda x: x["input"],
"length": lambda x: len(x["input"])
})
| RunnableLambda(process_text)
| llm
)
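Every LCEL composition inherits batching and streaming from the Runnable interface without extra code; for example:
# The same chain can be run in batch or consumed as a token stream
chain.batch([{"input": "first text"}, {"input": "second text"}])
for chunk in chain.stream({"input": "stream this"}):
    print(chunk.content, end="")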
# Mixing local and remote components (`load_local_chain` is a placeholder; chains
# served remotely with LangServe are consumed through RemoteRunnable)
from langserve import RemoteRunnable
local_chain = load_local_chain()
remote_chain = RemoteRunnable("https://api.chain.prod")
combined = local_chain | remote_chain
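The remote end in such a setup is typically a LangServe application exposing a chain over HTTP; a hedged sketch of the serving side (app title and path are illustrative):
from fastapi import FastAPI
from langserve import add_routes

app = FastAPI(title="Chain server")
add_routes(app, chain, path="/chain")  # exposes /chain/invoke, /chain/batch, /chain/stream
# run with: uvicorn server:app --host 0.0.0.0 --port 8000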
# Production-grade configuration example (`input_formatter` and `output_parser`
# stand for your own runnables)
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
# Semantic LLM cache backed by Redis: near-duplicate prompts reuse cached completions
set_llm_cache(RedisSemanticCache(
    redis_url="redis://cluster",
    embedding=OpenAIEmbeddings()
))
production_chain = (
    input_formatter
    | retriever.with_config(run_name="VectorSearch")
    | llm.bind(temperature=0.3, max_tokens=2000)  # model parameters go through bind(); with_config() carries run metadata
    | output_parser
).with_retry(stop_after_attempt=3)  # automatic retry, up to 3 attempts
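Resilience can be layered further with with_fallbacks, which fails over to a backup model when the primary call keeps erroring; a brief sketch using the two models defined earlier:
# If the OpenAI model still fails after retries, fall back to Claude
resilient_llm = openai_llm.with_fallbacks([claude_llm])
production_chain = production_chain.with_config(tags=["prod"], run_name="qa-pipeline")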
Component standardization:
Cloud-native support:
Performance optimization:
Security hardening:
The core strengths of the LangChain architecture:
Example of a typical production system architecture: